May 13 08:28:18.011814 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 08:28:18.011866 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 08:28:18.011885 kernel: BIOS-provided physical RAM map: May 13 08:28:18.011903 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 08:28:18.011916 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 08:28:18.011928 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 08:28:18.011943 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 13 08:28:18.011955 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 13 08:28:18.011968 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 08:28:18.011980 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 08:28:18.011993 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 13 08:28:18.012005 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 08:28:18.012020 kernel: NX (Execute Disable) protection: active May 13 08:28:18.012032 kernel: SMBIOS 3.0.0 present. May 13 08:28:18.012048 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 13 08:28:18.012061 kernel: Hypervisor detected: KVM May 13 08:28:18.012074 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 08:28:18.012087 kernel: kvm-clock: cpu 0, msr 3e196001, primary cpu clock May 13 08:28:18.012103 kernel: kvm-clock: using sched offset of 4357743632 cycles May 13 08:28:18.012118 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 08:28:18.012132 kernel: tsc: Detected 1996.249 MHz processor May 13 08:28:18.012209 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 08:28:18.012225 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 08:28:18.012239 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 13 08:28:18.012253 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 08:28:18.012267 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 13 08:28:18.012281 kernel: ACPI: Early table checksum verification disabled May 13 08:28:18.012298 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 13 08:28:18.012311 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 08:28:18.012325 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 08:28:18.012339 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 08:28:18.012353 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 13 08:28:18.012366 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 08:28:18.012380 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 08:28:18.012394 kernel: ACPI: Reserving FACP table memory at [mem 
0xbffe1a49-0xbffe1abc] May 13 08:28:18.012411 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 13 08:28:18.012425 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 13 08:28:18.012438 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 13 08:28:18.012452 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 13 08:28:18.012465 kernel: No NUMA configuration found May 13 08:28:18.012485 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 13 08:28:18.012499 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 13 08:28:18.012516 kernel: Zone ranges: May 13 08:28:18.012530 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 08:28:18.012544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 13 08:28:18.012558 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 13 08:28:18.012573 kernel: Movable zone start for each node May 13 08:28:18.012621 kernel: Early memory node ranges May 13 08:28:18.012637 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 08:28:18.012652 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 13 08:28:18.012669 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 13 08:28:18.012680 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 13 08:28:18.012690 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 08:28:18.012700 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 08:28:18.012710 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 13 08:28:18.012720 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 08:28:18.012728 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 08:28:18.012737 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 08:28:18.012745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 08:28:18.012755 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 08:28:18.012763 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 08:28:18.012771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 08:28:18.012779 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 08:28:18.012787 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 08:28:18.012795 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 08:28:18.012804 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 13 08:28:18.012812 kernel: Booting paravirtualized kernel on KVM May 13 08:28:18.012820 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 08:28:18.012830 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 13 08:28:18.012838 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 13 08:28:18.012846 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 13 08:28:18.012854 kernel: pcpu-alloc: [0] 0 1 May 13 08:28:18.012862 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0 May 13 08:28:18.012870 kernel: kvm-guest: PV spinlocks disabled, no host support May 13 08:28:18.012879 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 May 13 08:28:18.012887 kernel: Policy zone: Normal May 13 08:28:18.012896 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 08:28:18.012906 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 08:28:18.012915 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 08:28:18.012923 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 08:28:18.012931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 08:28:18.012940 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 225236K reserved, 0K cma-reserved) May 13 08:28:18.012948 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 08:28:18.012956 kernel: ftrace: allocating 34584 entries in 136 pages May 13 08:28:18.012965 kernel: ftrace: allocated 136 pages with 2 groups May 13 08:28:18.012974 kernel: rcu: Hierarchical RCU implementation. May 13 08:28:18.012983 kernel: rcu: RCU event tracing is enabled. May 13 08:28:18.012992 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 08:28:18.013000 kernel: Rude variant of Tasks RCU enabled. May 13 08:28:18.013008 kernel: Tracing variant of Tasks RCU enabled. May 13 08:28:18.013016 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 08:28:18.013024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 08:28:18.013032 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 08:28:18.013041 kernel: Console: colour VGA+ 80x25 May 13 08:28:18.013050 kernel: printk: console [tty0] enabled May 13 08:28:18.013058 kernel: printk: console [ttyS0] enabled May 13 08:28:18.013066 kernel: ACPI: Core revision 20210730 May 13 08:28:18.013075 kernel: APIC: Switch to symmetric I/O mode setup May 13 08:28:18.013083 kernel: x2apic enabled May 13 08:28:18.013091 kernel: Switched APIC routing to physical x2apic. May 13 08:28:18.013099 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 08:28:18.013107 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 08:28:18.013115 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 13 08:28:18.013125 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 13 08:28:18.013133 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 13 08:28:18.013141 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 08:28:18.013150 kernel: Spectre V2 : Mitigation: Retpolines May 13 08:28:18.013158 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 08:28:18.013166 kernel: Speculative Store Bypass: Vulnerable May 13 08:28:18.013174 kernel: x86/fpu: x87 FPU will use FXSAVE May 13 08:28:18.013182 kernel: Freeing SMP alternatives memory: 32K May 13 08:28:18.013190 kernel: pid_max: default: 32768 minimum: 301 May 13 08:28:18.013199 kernel: LSM: Security Framework initializing May 13 08:28:18.013207 kernel: SELinux: Initializing. 
May 13 08:28:18.013215 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 08:28:18.013224 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 08:28:18.013232 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 13 08:28:18.013241 kernel: Performance Events: AMD PMU driver. May 13 08:28:18.013254 kernel: ... version: 0 May 13 08:28:18.013264 kernel: ... bit width: 48 May 13 08:28:18.013272 kernel: ... generic registers: 4 May 13 08:28:18.013281 kernel: ... value mask: 0000ffffffffffff May 13 08:28:18.013289 kernel: ... max period: 00007fffffffffff May 13 08:28:18.013297 kernel: ... fixed-purpose events: 0 May 13 08:28:18.013307 kernel: ... event mask: 000000000000000f May 13 08:28:18.013316 kernel: signal: max sigframe size: 1440 May 13 08:28:18.013324 kernel: rcu: Hierarchical SRCU implementation. May 13 08:28:18.013333 kernel: smp: Bringing up secondary CPUs ... May 13 08:28:18.013341 kernel: x86: Booting SMP configuration: May 13 08:28:18.013351 kernel: .... node #0, CPUs: #1 May 13 08:28:18.013359 kernel: kvm-clock: cpu 1, msr 3e196041, secondary cpu clock May 13 08:28:18.013368 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0 May 13 08:28:18.013377 kernel: smp: Brought up 1 node, 2 CPUs May 13 08:28:18.013385 kernel: smpboot: Max logical packages: 2 May 13 08:28:18.013393 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 13 08:28:18.013402 kernel: devtmpfs: initialized May 13 08:28:18.013410 kernel: x86/mm: Memory block size: 128MB May 13 08:28:18.013419 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 08:28:18.013429 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 08:28:18.013437 kernel: pinctrl core: initialized pinctrl subsystem May 13 08:28:18.013446 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 08:28:18.013454 kernel: audit: initializing netlink subsys (disabled) May 13 08:28:18.013463 kernel: audit: type=2000 audit(1747124896.667:1): state=initialized audit_enabled=0 res=1 May 13 08:28:18.013471 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 08:28:18.013480 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 08:28:18.013488 kernel: cpuidle: using governor menu May 13 08:28:18.013496 kernel: ACPI: bus type PCI registered May 13 08:28:18.013506 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 08:28:18.013515 kernel: dca service started, version 1.12.1 May 13 08:28:18.013523 kernel: PCI: Using configuration type 1 for base access May 13 08:28:18.013532 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 08:28:18.013540 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 08:28:18.013549 kernel: ACPI: Added _OSI(Module Device) May 13 08:28:18.013557 kernel: ACPI: Added _OSI(Processor Device) May 13 08:28:18.013566 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 08:28:18.013574 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 08:28:18.013584 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 08:28:18.013604 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 08:28:18.013613 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 08:28:18.013621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 08:28:18.013630 kernel: ACPI: Interpreter enabled May 13 08:28:18.013638 kernel: ACPI: PM: (supports S0 S3 S5) May 13 08:28:18.013647 kernel: ACPI: Using IOAPIC for interrupt routing May 13 08:28:18.013656 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 08:28:18.013664 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 08:28:18.013675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 08:28:18.013821 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 08:28:18.013916 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. May 13 08:28:18.013929 kernel: acpiphp: Slot [3] registered May 13 08:28:18.013938 kernel: acpiphp: Slot [4] registered May 13 08:28:18.013947 kernel: acpiphp: Slot [5] registered May 13 08:28:18.013955 kernel: acpiphp: Slot [6] registered May 13 08:28:18.013964 kernel: acpiphp: Slot [7] registered May 13 08:28:18.013975 kernel: acpiphp: Slot [8] registered May 13 08:28:18.013984 kernel: acpiphp: Slot [9] registered May 13 08:28:18.013992 kernel: acpiphp: Slot [10] registered May 13 08:28:18.014000 kernel: acpiphp: Slot [11] registered May 13 08:28:18.014009 kernel: acpiphp: Slot [12] registered May 13 08:28:18.014017 kernel: acpiphp: Slot [13] registered May 13 08:28:18.014026 kernel: acpiphp: Slot [14] registered May 13 08:28:18.014034 kernel: acpiphp: Slot [15] registered May 13 08:28:18.014043 kernel: acpiphp: Slot [16] registered May 13 08:28:18.014053 kernel: acpiphp: Slot [17] registered May 13 08:28:18.014061 kernel: acpiphp: Slot [18] registered May 13 08:28:18.014070 kernel: acpiphp: Slot [19] registered May 13 08:28:18.014078 kernel: acpiphp: Slot [20] registered May 13 08:28:18.014086 kernel: acpiphp: Slot [21] registered May 13 08:28:18.014095 kernel: acpiphp: Slot [22] registered May 13 08:28:18.014103 kernel: acpiphp: Slot [23] registered May 13 08:28:18.014112 kernel: acpiphp: Slot [24] registered May 13 08:28:18.014120 kernel: acpiphp: Slot [25] registered May 13 08:28:18.014129 kernel: acpiphp: Slot [26] registered May 13 08:28:18.014139 kernel: acpiphp: Slot [27] registered May 13 08:28:18.014147 kernel: acpiphp: Slot [28] registered May 13 08:28:18.014156 kernel: acpiphp: Slot [29] registered May 13 08:28:18.014164 kernel: acpiphp: Slot [30] registered May 13 08:28:18.014172 kernel: acpiphp: Slot [31] registered May 13 08:28:18.014181 kernel: PCI host bridge to bus 0000:00 May 13 08:28:18.014278 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 08:28:18.014359 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 08:28:18.014441 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 08:28:18.014519 kernel: pci_bus 0000:00: root bus 
resource [mem 0xc0000000-0xfebfffff window] May 13 08:28:18.014614 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 13 08:28:18.014694 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 08:28:18.014797 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 13 08:28:18.014917 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 08:28:18.015021 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 13 08:28:18.015111 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 13 08:28:18.015202 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 08:28:18.015298 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 08:28:18.015387 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 08:28:18.015473 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 08:28:18.015570 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 13 08:28:18.015687 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 13 08:28:18.015778 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 13 08:28:18.015878 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 13 08:28:18.015969 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 13 08:28:18.016061 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 13 08:28:18.016190 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 13 08:28:18.016282 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 13 08:28:18.016377 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 08:28:18.016475 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 13 08:28:18.016565 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 13 08:28:18.016677 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 13 08:28:18.016768 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 13 08:28:18.016857 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 13 08:28:18.016957 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 13 08:28:18.017051 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 13 08:28:18.017140 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 13 08:28:18.017229 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 13 08:28:18.017326 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 13 08:28:18.017416 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 13 08:28:18.017506 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 13 08:28:18.018664 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 13 08:28:18.018768 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 13 08:28:18.018857 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 13 08:28:18.018944 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 13 08:28:18.018958 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 08:28:18.018967 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 08:28:18.018976 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 08:28:18.018985 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 08:28:18.018998 kernel: ACPI: 
PCI: Interrupt link LNKS configured for IRQ 9 May 13 08:28:18.019006 kernel: iommu: Default domain type: Translated May 13 08:28:18.019015 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 08:28:18.019104 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 13 08:28:18.019193 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 08:28:18.019283 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 13 08:28:18.019296 kernel: vgaarb: loaded May 13 08:28:18.019305 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 08:28:18.019314 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 08:28:18.019326 kernel: PTP clock support registered May 13 08:28:18.019334 kernel: PCI: Using ACPI for IRQ routing May 13 08:28:18.019343 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 08:28:18.019352 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 08:28:18.019360 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 13 08:28:18.019369 kernel: clocksource: Switched to clocksource kvm-clock May 13 08:28:18.019377 kernel: VFS: Disk quotas dquot_6.6.0 May 13 08:28:18.019386 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 08:28:18.019395 kernel: pnp: PnP ACPI init May 13 08:28:18.019485 kernel: pnp 00:03: [dma 2] May 13 08:28:18.019499 kernel: pnp: PnP ACPI: found 5 devices May 13 08:28:18.019508 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 08:28:18.019517 kernel: NET: Registered PF_INET protocol family May 13 08:28:18.019526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 08:28:18.019535 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 08:28:18.019544 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 08:28:18.019553 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 08:28:18.019564 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 08:28:18.019572 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 08:28:18.019581 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 08:28:18.019605 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 08:28:18.019615 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 08:28:18.019623 kernel: NET: Registered PF_XDP protocol family May 13 08:28:18.019705 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 08:28:18.019781 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 08:28:18.019852 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 08:28:18.019928 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 13 08:28:18.019999 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 13 08:28:18.020083 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 13 08:28:18.020177 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 08:28:18.020262 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 13 08:28:18.020276 kernel: PCI: CLS 0 bytes, default 64 May 13 08:28:18.020285 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 08:28:18.020294 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) 
May 13 08:28:18.020306 kernel: Initialise system trusted keyrings May 13 08:28:18.020315 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 08:28:18.020324 kernel: Key type asymmetric registered May 13 08:28:18.020333 kernel: Asymmetric key parser 'x509' registered May 13 08:28:18.020341 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 08:28:18.020350 kernel: io scheduler mq-deadline registered May 13 08:28:18.020359 kernel: io scheduler kyber registered May 13 08:28:18.020367 kernel: io scheduler bfq registered May 13 08:28:18.020376 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 08:28:18.020386 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 13 08:28:18.020395 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 13 08:28:18.020404 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 13 08:28:18.020413 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 13 08:28:18.020421 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 08:28:18.020430 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 08:28:18.020438 kernel: random: crng init done May 13 08:28:18.020447 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 08:28:18.020456 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 08:28:18.020466 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 08:28:18.020475 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 08:28:18.020562 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 08:28:18.023142 kernel: rtc_cmos 00:04: registered as rtc0 May 13 08:28:18.023247 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T08:28:17 UTC (1747124897) May 13 08:28:18.023340 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 13 08:28:18.023353 kernel: NET: Registered PF_INET6 protocol family May 13 08:28:18.023363 kernel: Segment Routing with IPv6 May 13 08:28:18.023376 kernel: In-situ OAM (IOAM) with IPv6 May 13 08:28:18.023385 kernel: NET: Registered PF_PACKET protocol family May 13 08:28:18.023394 kernel: Key type dns_resolver registered May 13 08:28:18.023403 kernel: IPI shorthand broadcast: enabled May 13 08:28:18.023412 kernel: sched_clock: Marking stable (849271641, 173534401)->(1092851753, -70045711) May 13 08:28:18.023421 kernel: registered taskstats version 1 May 13 08:28:18.023430 kernel: Loading compiled-in X.509 certificates May 13 08:28:18.023439 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 08:28:18.023447 kernel: Key type .fscrypt registered May 13 08:28:18.023457 kernel: Key type fscrypt-provisioning registered May 13 08:28:18.023466 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 08:28:18.023475 kernel: ima: Allocated hash algorithm: sha1 May 13 08:28:18.023484 kernel: ima: No architecture policies found May 13 08:28:18.023493 kernel: clk: Disabling unused clocks May 13 08:28:18.023501 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 08:28:18.023510 kernel: Write protecting the kernel read-only data: 28672k May 13 08:28:18.023519 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 08:28:18.023529 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 08:28:18.023538 kernel: Run /init as init process May 13 08:28:18.023546 kernel: with arguments: May 13 08:28:18.023554 kernel: /init May 13 08:28:18.023563 kernel: with environment: May 13 08:28:18.023571 kernel: HOME=/ May 13 08:28:18.023580 kernel: TERM=linux May 13 08:28:18.023605 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 08:28:18.023617 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 08:28:18.023631 systemd[1]: Detected virtualization kvm. May 13 08:28:18.023640 systemd[1]: Detected architecture x86-64. May 13 08:28:18.023650 systemd[1]: Running in initrd. May 13 08:28:18.023659 systemd[1]: No hostname configured, using default hostname. May 13 08:28:18.023668 systemd[1]: Hostname set to . May 13 08:28:18.023678 systemd[1]: Initializing machine ID from VM UUID. May 13 08:28:18.023687 systemd[1]: Queued start job for default target initrd.target. May 13 08:28:18.023698 systemd[1]: Started systemd-ask-password-console.path. May 13 08:28:18.023707 systemd[1]: Reached target cryptsetup.target. May 13 08:28:18.023717 systemd[1]: Reached target paths.target. May 13 08:28:18.023726 systemd[1]: Reached target slices.target. May 13 08:28:18.023735 systemd[1]: Reached target swap.target. May 13 08:28:18.023744 systemd[1]: Reached target timers.target. May 13 08:28:18.023753 systemd[1]: Listening on iscsid.socket. May 13 08:28:18.023763 systemd[1]: Listening on iscsiuio.socket. May 13 08:28:18.023774 systemd[1]: Listening on systemd-journald-audit.socket. May 13 08:28:18.027318 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 08:28:18.027334 systemd[1]: Listening on systemd-journald.socket. May 13 08:28:18.027344 systemd[1]: Listening on systemd-networkd.socket. May 13 08:28:18.027353 systemd[1]: Listening on systemd-udevd-control.socket. May 13 08:28:18.027362 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 08:28:18.027373 systemd[1]: Reached target sockets.target. May 13 08:28:18.027382 systemd[1]: Starting kmod-static-nodes.service... May 13 08:28:18.027391 systemd[1]: Finished network-cleanup.service. May 13 08:28:18.027400 systemd[1]: Starting systemd-fsck-usr.service... May 13 08:28:18.027409 systemd[1]: Starting systemd-journald.service... May 13 08:28:18.027418 systemd[1]: Starting systemd-modules-load.service... May 13 08:28:18.027427 systemd[1]: Starting systemd-resolved.service... May 13 08:28:18.027436 systemd[1]: Starting systemd-vconsole-setup.service... May 13 08:28:18.027446 systemd[1]: Finished kmod-static-nodes.service. May 13 08:28:18.027456 systemd[1]: Finished systemd-fsck-usr.service. 
May 13 08:28:18.027471 systemd-journald[185]: Journal started May 13 08:28:18.027524 systemd-journald[185]: Runtime Journal (/run/log/journal/36b3b01e868a4aee9addd7ce382efaef) is 8.0M, max 78.4M, 70.4M free. May 13 08:28:18.017974 systemd-modules-load[186]: Inserted module 'overlay' May 13 08:28:18.068343 systemd[1]: Started systemd-journald.service. May 13 08:28:18.068367 kernel: audit: type=1130 audit(1747124898.041:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.068380 kernel: audit: type=1130 audit(1747124898.057:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.027194 systemd-resolved[187]: Positive Trust Anchors: May 13 08:28:18.077692 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 08:28:18.077709 kernel: audit: type=1130 audit(1747124898.068:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.027210 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 08:28:18.027259 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 08:28:18.089923 kernel: audit: type=1130 audit(1747124898.077:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.089943 kernel: Bridge firewalling registered May 13 08:28:18.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.030120 systemd-resolved[187]: Defaulting to hostname 'linux'. May 13 08:28:18.058071 systemd[1]: Started systemd-resolved.service. May 13 08:28:18.077183 systemd[1]: Finished systemd-vconsole-setup.service. May 13 08:28:18.078027 systemd[1]: Reached target nss-lookup.target. May 13 08:28:18.081986 systemd[1]: Starting dracut-cmdline-ask.service... 
May 13 08:28:18.088374 systemd-modules-load[186]: Inserted module 'br_netfilter' May 13 08:28:18.089262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 08:28:18.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.098870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 08:28:18.104820 kernel: audit: type=1130 audit(1747124898.098:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.111253 systemd[1]: Finished dracut-cmdline-ask.service. May 13 08:28:18.118342 kernel: audit: type=1130 audit(1747124898.111:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.112893 systemd[1]: Starting dracut-cmdline.service... May 13 08:28:18.124345 dracut-cmdline[204]: dracut-dracut-053 May 13 08:28:18.127137 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 08:28:18.129370 kernel: SCSI subsystem initialized May 13 08:28:18.142380 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 08:28:18.142433 kernel: device-mapper: uevent: version 1.0.3 May 13 08:28:18.144565 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 08:28:18.147812 systemd-modules-load[186]: Inserted module 'dm_multipath' May 13 08:28:18.148692 systemd[1]: Finished systemd-modules-load.service. May 13 08:28:18.150478 systemd[1]: Starting systemd-sysctl.service... May 13 08:28:18.157887 kernel: audit: type=1130 audit(1747124898.149:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.163167 systemd[1]: Finished systemd-sysctl.service. May 13 08:28:18.169058 kernel: audit: type=1130 audit(1747124898.163:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.198618 kernel: Loading iSCSI transport class v2.0-870. 
May 13 08:28:18.219620 kernel: iscsi: registered transport (tcp) May 13 08:28:18.247008 kernel: iscsi: registered transport (qla4xxx) May 13 08:28:18.247086 kernel: QLogic iSCSI HBA Driver May 13 08:28:18.304292 systemd[1]: Finished dracut-cmdline.service. May 13 08:28:18.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.307921 systemd[1]: Starting dracut-pre-udev.service... May 13 08:28:18.319546 kernel: audit: type=1130 audit(1747124898.304:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.407790 kernel: raid6: sse2x4 gen() 5857 MB/s May 13 08:28:18.425715 kernel: raid6: sse2x4 xor() 4986 MB/s May 13 08:28:18.443701 kernel: raid6: sse2x2 gen() 13690 MB/s May 13 08:28:18.461825 kernel: raid6: sse2x2 xor() 8414 MB/s May 13 08:28:18.479706 kernel: raid6: sse2x1 gen() 11084 MB/s May 13 08:28:18.502047 kernel: raid6: sse2x1 xor() 6870 MB/s May 13 08:28:18.502130 kernel: raid6: using algorithm sse2x2 gen() 13690 MB/s May 13 08:28:18.502156 kernel: raid6: .... xor() 8414 MB/s, rmw enabled May 13 08:28:18.503273 kernel: raid6: using ssse3x2 recovery algorithm May 13 08:28:18.520015 kernel: xor: measuring software checksum speed May 13 08:28:18.520113 kernel: prefetch64-sse : 18373 MB/sec May 13 08:28:18.520185 kernel: generic_sse : 15531 MB/sec May 13 08:28:18.521402 kernel: xor: using function: prefetch64-sse (18373 MB/sec) May 13 08:28:18.639684 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 08:28:18.655381 systemd[1]: Finished dracut-pre-udev.service. May 13 08:28:18.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.655000 audit: BPF prog-id=7 op=LOAD May 13 08:28:18.655000 audit: BPF prog-id=8 op=LOAD May 13 08:28:18.657024 systemd[1]: Starting systemd-udevd.service... May 13 08:28:18.671034 systemd-udevd[386]: Using default interface naming scheme 'v252'. May 13 08:28:18.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.675785 systemd[1]: Started systemd-udevd.service. May 13 08:28:18.682190 systemd[1]: Starting dracut-pre-trigger.service... May 13 08:28:18.707953 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation May 13 08:28:18.756074 systemd[1]: Finished dracut-pre-trigger.service. May 13 08:28:18.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.757456 systemd[1]: Starting systemd-udev-trigger.service... May 13 08:28:18.817463 systemd[1]: Finished systemd-udev-trigger.service. May 13 08:28:18.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:18.879645 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 13 08:28:18.920193 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. May 13 08:28:18.920221 kernel: GPT:17805311 != 20971519 May 13 08:28:18.920241 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 08:28:18.920300 kernel: GPT:17805311 != 20971519 May 13 08:28:18.920318 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 08:28:18.920339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:28:18.921613 kernel: libata version 3.00 loaded. May 13 08:28:18.925771 kernel: ata_piix 0000:00:01.1: version 2.13 May 13 08:28:18.926851 kernel: scsi host0: ata_piix May 13 08:28:18.926996 kernel: scsi host1: ata_piix May 13 08:28:18.927114 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 13 08:28:18.927134 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 13 08:28:18.959622 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) May 13 08:28:18.965755 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 08:28:18.999952 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 08:28:19.003218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 08:28:19.003783 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 08:28:19.008474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 08:28:19.009919 systemd[1]: Starting disk-uuid.service... May 13 08:28:19.021674 disk-uuid[471]: Primary Header is updated. May 13 08:28:19.021674 disk-uuid[471]: Secondary Entries is updated. May 13 08:28:19.021674 disk-uuid[471]: Secondary Header is updated. May 13 08:28:19.029207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:28:19.032649 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:28:20.046665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 08:28:20.048051 disk-uuid[472]: The operation has completed successfully. May 13 08:28:20.118320 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 08:28:20.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.118546 systemd[1]: Finished disk-uuid.service. May 13 08:28:20.134999 systemd[1]: Starting verity-setup.service... May 13 08:28:20.157733 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 13 08:28:20.253145 systemd[1]: Found device dev-mapper-usr.device. May 13 08:28:20.256192 systemd[1]: Mounting sysusr-usr.mount... May 13 08:28:20.257824 systemd[1]: Finished verity-setup.service. May 13 08:28:20.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.386669 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 08:28:20.387421 systemd[1]: Mounted sysusr-usr.mount. May 13 08:28:20.388048 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 08:28:20.388815 systemd[1]: Starting ignition-setup.service... May 13 08:28:20.390014 systemd[1]: Starting parse-ip-for-networkd.service... 
May 13 08:28:20.410331 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 08:28:20.410389 kernel: BTRFS info (device vda6): using free space tree May 13 08:28:20.410401 kernel: BTRFS info (device vda6): has skinny extents May 13 08:28:20.424844 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 08:28:20.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.441288 systemd[1]: Finished ignition-setup.service. May 13 08:28:20.443376 systemd[1]: Starting ignition-fetch-offline.service... May 13 08:28:20.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.498034 systemd[1]: Finished parse-ip-for-networkd.service. May 13 08:28:20.498000 audit: BPF prog-id=9 op=LOAD May 13 08:28:20.500119 systemd[1]: Starting systemd-networkd.service... May 13 08:28:20.539913 systemd-networkd[643]: lo: Link UP May 13 08:28:20.539924 systemd-networkd[643]: lo: Gained carrier May 13 08:28:20.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.540436 systemd-networkd[643]: Enumeration completed May 13 08:28:20.540512 systemd[1]: Started systemd-networkd.service. May 13 08:28:20.540742 systemd-networkd[643]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 08:28:20.541899 systemd-networkd[643]: eth0: Link UP May 13 08:28:20.541903 systemd-networkd[643]: eth0: Gained carrier May 13 08:28:20.544058 systemd[1]: Reached target network.target. May 13 08:28:20.546396 systemd[1]: Starting iscsiuio.service... May 13 08:28:20.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.557636 systemd[1]: Started iscsiuio.service. May 13 08:28:20.558800 systemd[1]: Starting iscsid.service... May 13 08:28:20.562261 iscsid[648]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 08:28:20.562261 iscsid[648]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 13 08:28:20.562261 iscsid[648]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 08:28:20.562261 iscsid[648]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 08:28:20.562261 iscsid[648]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 08:28:20.562261 iscsid[648]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 08:28:20.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:28:20.564660 systemd-networkd[643]: eth0: DHCPv4 address 172.24.4.231/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 08:28:20.566222 systemd[1]: Started iscsid.service. May 13 08:28:20.571290 systemd[1]: Starting dracut-initqueue.service... May 13 08:28:20.599175 systemd[1]: Finished dracut-initqueue.service. May 13 08:28:20.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:20.599748 systemd[1]: Reached target remote-fs-pre.target. May 13 08:28:20.600277 systemd[1]: Reached target remote-cryptsetup.target. May 13 08:28:20.602139 systemd[1]: Reached target remote-fs.target. May 13 08:28:20.604751 systemd[1]: Starting dracut-pre-mount.service... May 13 08:28:20.618361 systemd[1]: Finished dracut-pre-mount.service. May 13 08:28:20.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.052474 ignition[575]: Ignition 2.14.0 May 13 08:28:21.052500 ignition[575]: Stage: fetch-offline May 13 08:28:21.052686 ignition[575]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:28:21.052737 ignition[575]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:28:21.055959 ignition[575]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:28:21.056301 ignition[575]: parsed url from cmdline: "" May 13 08:28:21.059201 systemd[1]: Finished ignition-fetch-offline.service. May 13 08:28:21.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.056314 ignition[575]: no config URL provided May 13 08:28:21.056332 ignition[575]: reading system config file "/usr/lib/ignition/user.ign" May 13 08:28:21.063652 systemd[1]: Starting ignition-fetch.service... May 13 08:28:21.056360 ignition[575]: no config at "/usr/lib/ignition/user.ign" May 13 08:28:21.056375 ignition[575]: failed to fetch config: resource requires networking May 13 08:28:21.056712 ignition[575]: Ignition finished successfully May 13 08:28:21.086687 ignition[666]: Ignition 2.14.0 May 13 08:28:21.086714 ignition[666]: Stage: fetch May 13 08:28:21.086933 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:28:21.086973 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:28:21.089423 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:28:21.089662 ignition[666]: parsed url from cmdline: "" May 13 08:28:21.089671 ignition[666]: no config URL provided May 13 08:28:21.089683 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" May 13 08:28:21.089703 ignition[666]: no config at "/usr/lib/ignition/user.ign" May 13 08:28:21.096476 ignition[666]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 13 08:28:21.096526 ignition[666]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
May 13 08:28:21.098850 ignition[666]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 13 08:28:21.385934 ignition[666]: GET result: OK May 13 08:28:21.386033 ignition[666]: parsing config with SHA512: f1547a1d47341ec5c47fe8afa7fdd3f434b1e3e7669952683b832daf2ac3e69cec53e6c38085fb041015c46474dce3d98fc5be50bc2012a1ac0299ce562da7ac May 13 08:28:21.402003 unknown[666]: fetched base config from "system" May 13 08:28:21.402035 unknown[666]: fetched base config from "system" May 13 08:28:21.402930 ignition[666]: fetch: fetch complete May 13 08:28:21.402049 unknown[666]: fetched user config from "openstack" May 13 08:28:21.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.402943 ignition[666]: fetch: fetch passed May 13 08:28:21.405899 systemd[1]: Finished ignition-fetch.service. May 13 08:28:21.403022 ignition[666]: Ignition finished successfully May 13 08:28:21.409108 systemd[1]: Starting ignition-kargs.service... May 13 08:28:21.435735 ignition[672]: Ignition 2.14.0 May 13 08:28:21.435761 ignition[672]: Stage: kargs May 13 08:28:21.436011 ignition[672]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:28:21.436053 ignition[672]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:28:21.438357 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:28:21.440558 ignition[672]: kargs: kargs passed May 13 08:28:21.440716 ignition[672]: Ignition finished successfully May 13 08:28:21.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.442940 systemd[1]: Finished ignition-kargs.service. May 13 08:28:21.446632 systemd[1]: Starting ignition-disks.service... May 13 08:28:21.468212 ignition[678]: Ignition 2.14.0 May 13 08:28:21.470053 ignition[678]: Stage: disks May 13 08:28:21.471671 ignition[678]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:28:21.473645 ignition[678]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:28:21.475901 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:28:21.479334 ignition[678]: disks: disks passed May 13 08:28:21.479492 ignition[678]: Ignition finished successfully May 13 08:28:21.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.481070 systemd[1]: Finished ignition-disks.service. May 13 08:28:21.482516 systemd[1]: Reached target initrd-root-device.target. May 13 08:28:21.484873 systemd[1]: Reached target local-fs-pre.target. May 13 08:28:21.487292 systemd[1]: Reached target local-fs.target. May 13 08:28:21.489818 systemd[1]: Reached target sysinit.target. May 13 08:28:21.492241 systemd[1]: Reached target basic.target. May 13 08:28:21.496418 systemd[1]: Starting systemd-fsck-root.service... 
May 13 08:28:21.546847 systemd-fsck[685]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks May 13 08:28:21.557495 systemd[1]: Finished systemd-fsck-root.service. May 13 08:28:21.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.561220 systemd[1]: Mounting sysroot.mount... May 13 08:28:21.583658 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 08:28:21.584696 systemd[1]: Mounted sysroot.mount. May 13 08:28:21.587262 systemd[1]: Reached target initrd-root-fs.target. May 13 08:28:21.592716 systemd[1]: Mounting sysroot-usr.mount... May 13 08:28:21.596249 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 08:28:21.599391 systemd[1]: Starting flatcar-openstack-hostname.service... May 13 08:28:21.600799 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 08:28:21.600867 systemd[1]: Reached target ignition-diskful.target. May 13 08:28:21.604823 systemd[1]: Mounted sysroot-usr.mount. May 13 08:28:21.615425 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 08:28:21.620323 systemd[1]: Starting initrd-setup-root.service... May 13 08:28:21.634890 initrd-setup-root[697]: cut: /sysroot/etc/passwd: No such file or directory May 13 08:28:21.653622 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (692) May 13 08:28:21.656434 initrd-setup-root[705]: cut: /sysroot/etc/group: No such file or directory May 13 08:28:21.663505 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 08:28:21.663537 kernel: BTRFS info (device vda6): using free space tree May 13 08:28:21.663550 kernel: BTRFS info (device vda6): has skinny extents May 13 08:28:21.670182 initrd-setup-root[729]: cut: /sysroot/etc/shadow: No such file or directory May 13 08:28:21.677448 initrd-setup-root[737]: cut: /sysroot/etc/gshadow: No such file or directory May 13 08:28:21.689269 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 08:28:21.756187 systemd[1]: Finished initrd-setup-root.service. May 13 08:28:21.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.757791 systemd[1]: Starting ignition-mount.service... May 13 08:28:21.767291 systemd[1]: Starting sysroot-boot.service... May 13 08:28:21.782003 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 13 08:28:21.782279 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
May 13 08:28:21.805699 ignition[760]: INFO : Ignition 2.14.0 May 13 08:28:21.806535 ignition[760]: INFO : Stage: mount May 13 08:28:21.807164 ignition[760]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:28:21.807915 ignition[760]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:28:21.809926 ignition[760]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:28:21.811681 ignition[760]: INFO : mount: mount passed May 13 08:28:21.812255 ignition[760]: INFO : Ignition finished successfully May 13 08:28:21.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.813646 systemd[1]: Finished ignition-mount.service. May 13 08:28:21.828380 systemd[1]: Finished sysroot-boot.service. May 13 08:28:21.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.845098 coreos-metadata[691]: May 13 08:28:21.845 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 08:28:21.863069 coreos-metadata[691]: May 13 08:28:21.863 INFO Fetch successful May 13 08:28:21.863740 coreos-metadata[691]: May 13 08:28:21.863 INFO wrote hostname ci-3510-3-7-n-dd395c61fd.novalocal to /sysroot/etc/hostname May 13 08:28:21.869047 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 13 08:28:21.869206 systemd[1]: Finished flatcar-openstack-hostname.service. May 13 08:28:21.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:21.872326 systemd[1]: Starting ignition-files.service... May 13 08:28:21.882088 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 08:28:21.892647 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (768) May 13 08:28:21.897143 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 08:28:21.897197 kernel: BTRFS info (device vda6): using free space tree May 13 08:28:21.897218 kernel: BTRFS info (device vda6): has skinny extents May 13 08:28:21.909268 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
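The flatcar-openstack-hostname step above fetches the instance hostname from the EC2-compatible metadata endpoint and writes it to /sysroot/etc/hostname. A hypothetical Python sketch of the equivalent manual steps follows; the endpoint and target path come from the log, everything else is illustrative.

# Hypothetical sketch, not coreos-metadata itself: fetch the hostname from
# the metadata endpoint shown above and write it to a hostname file.
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

def write_hostname(dest: str) -> str:
    with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(dest, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    # "/sysroot/etc/hostname" is where the log says the real service writes;
    # a local path is used here so the sketch can run without privileges.
    print("wrote hostname", write_hostname("./hostname"))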
May 13 08:28:21.929870 ignition[787]: INFO : Ignition 2.14.0 May 13 08:28:21.931261 ignition[787]: INFO : Stage: files May 13 08:28:21.932457 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:28:21.933893 ignition[787]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:28:21.937873 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:28:21.942671 ignition[787]: DEBUG : files: compiled without relabeling support, skipping May 13 08:28:21.945488 ignition[787]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 08:28:21.947170 ignition[787]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 08:28:21.954233 ignition[787]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 08:28:21.956093 ignition[787]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 08:28:21.959553 unknown[787]: wrote ssh authorized keys file for user: core May 13 08:28:21.960856 ignition[787]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 08:28:21.962721 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 08:28:21.964347 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 13 08:28:21.964347 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 08:28:21.964347 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 08:28:21.964347 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 08:28:21.964347 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 08:28:21.972150 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:28:21.972150 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:28:21.972150 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:28:21.972150 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 08:28:21.976473 systemd-networkd[643]: eth0: Gained IPv6LL May 13 08:28:22.617555 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK May 13 08:28:24.258459 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 08:28:24.258459 ignition[787]: INFO : files: op(8): [started] processing unit "coreos-metadata-sshkeys@.service" May 13 
08:28:24.258459 ignition[787]: INFO : files: op(8): [finished] processing unit "coreos-metadata-sshkeys@.service" May 13 08:28:24.258459 ignition[787]: INFO : files: op(9): [started] processing unit "containerd.service" May 13 08:28:24.293297 kernel: kauditd_printk_skb: 27 callbacks suppressed May 13 08:28:24.293337 kernel: audit: type=1130 audit(1747124904.272:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.267415 systemd[1]: Finished ignition-files.service. May 13 08:28:24.294657 ignition[787]: INFO : files: op(9): op(a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 08:28:24.294657 ignition[787]: INFO : files: op(9): op(a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 13 08:28:24.294657 ignition[787]: INFO : files: op(9): [finished] processing unit "containerd.service" May 13 08:28:24.294657 ignition[787]: INFO : files: op(b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 13 08:28:24.294657 ignition[787]: INFO : files: op(b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 13 08:28:24.294657 ignition[787]: INFO : files: createResultFile: createFiles: op(c): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 08:28:24.294657 ignition[787]: INFO : files: createResultFile: createFiles: op(c): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 08:28:24.294657 ignition[787]: INFO : files: files passed May 13 08:28:24.294657 ignition[787]: INFO : Ignition finished successfully May 13 08:28:24.326832 kernel: audit: type=1130 audit(1747124904.298:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.326865 kernel: audit: type=1130 audit(1747124904.315:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.326878 kernel: audit: type=1131 audit(1747124904.315:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.273807 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
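The files stage above also marks coreos-metadata-sshkeys@.service as preset-enabled. The sketch below shows, hypothetically, what writing such a preset entry could look like; the preset file name is an assumption, and the containerd drop-in contents are not shown in the log, so they are not reproduced here.

# Hypothetical sketch of a systemd preset entry like the one the files
# stage reports; the file name "20-example.preset" is an assumption.
import pathlib

PRESET_DIR = pathlib.Path("/etc/systemd/system-preset")

def mark_enabled(unit: str = "coreos-metadata-sshkeys@.service") -> None:
    # Preset files are plain text, one directive per line; "enable <unit>"
    # is what "setting preset to enabled" in the log amounts to.
    PRESET_DIR.mkdir(parents=True, exist_ok=True)
    with (PRESET_DIR / "20-example.preset").open("a") as f:
        f.write(f"enable {unit}\n")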
May 13 08:28:24.289541 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 08:28:24.328618 initrd-setup-root-after-ignition[810]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 08:28:24.290379 systemd[1]: Starting ignition-quench.service... May 13 08:28:24.296442 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 08:28:24.299368 systemd[1]: Reached target ignition-complete.target. May 13 08:28:24.308259 systemd[1]: Starting initrd-parse-etc.service... May 13 08:28:24.314253 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 08:28:24.314439 systemd[1]: Finished ignition-quench.service. May 13 08:28:24.336446 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 08:28:24.336739 systemd[1]: Finished initrd-parse-etc.service. May 13 08:28:24.348143 kernel: audit: type=1130 audit(1747124904.337:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.348168 kernel: audit: type=1131 audit(1747124904.337:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.338656 systemd[1]: Reached target initrd-fs.target. May 13 08:28:24.349335 systemd[1]: Reached target initrd.target. May 13 08:28:24.351028 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 08:28:24.352893 systemd[1]: Starting dracut-pre-pivot.service... May 13 08:28:24.368728 systemd[1]: Finished dracut-pre-pivot.service. May 13 08:28:24.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.381633 kernel: audit: type=1130 audit(1747124904.369:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.381760 systemd[1]: Starting initrd-cleanup.service... May 13 08:28:24.396735 systemd[1]: Stopped target nss-lookup.target. May 13 08:28:24.397813 systemd[1]: Stopped target remote-cryptsetup.target. May 13 08:28:24.398926 systemd[1]: Stopped target timers.target. May 13 08:28:24.399965 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 08:28:24.400699 systemd[1]: Stopped dracut-pre-pivot.service. May 13 08:28:24.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.402934 systemd[1]: Stopped target initrd.target. 
May 13 08:28:24.407767 kernel: audit: type=1131 audit(1747124904.401:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.407472 systemd[1]: Stopped target basic.target. May 13 08:28:24.408757 systemd[1]: Stopped target ignition-complete.target. May 13 08:28:24.410054 systemd[1]: Stopped target ignition-diskful.target. May 13 08:28:24.411400 systemd[1]: Stopped target initrd-root-device.target. May 13 08:28:24.412817 systemd[1]: Stopped target remote-fs.target. May 13 08:28:24.414115 systemd[1]: Stopped target remote-fs-pre.target. May 13 08:28:24.415541 systemd[1]: Stopped target sysinit.target. May 13 08:28:24.416895 systemd[1]: Stopped target local-fs.target. May 13 08:28:24.418136 systemd[1]: Stopped target local-fs-pre.target. May 13 08:28:24.419402 systemd[1]: Stopped target swap.target. May 13 08:28:24.420679 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 08:28:24.420977 systemd[1]: Stopped dracut-pre-mount.service. May 13 08:28:24.427469 kernel: audit: type=1131 audit(1747124904.422:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.422913 systemd[1]: Stopped target cryptsetup.target. May 13 08:28:24.428487 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 08:28:24.428793 systemd[1]: Stopped dracut-initqueue.service. May 13 08:28:24.435764 kernel: audit: type=1131 audit(1747124904.429:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.430536 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 08:28:24.430843 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 08:28:24.438916 systemd[1]: ignition-files.service: Deactivated successfully. May 13 08:28:24.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.439041 systemd[1]: Stopped ignition-files.service. May 13 08:28:24.441490 systemd[1]: Stopping ignition-mount.service... May 13 08:28:24.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.442881 systemd[1]: Stopping iscsiuio.service... May 13 08:28:24.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.449670 systemd[1]: Stopping sysroot-boot.service... May 13 08:28:24.450152 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 13 08:28:24.450289 systemd[1]: Stopped systemd-udev-trigger.service. May 13 08:28:24.451815 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 08:28:24.458103 ignition[825]: INFO : Ignition 2.14.0 May 13 08:28:24.458103 ignition[825]: INFO : Stage: umount May 13 08:28:24.458103 ignition[825]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 08:28:24.458103 ignition[825]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 08:28:24.458103 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 08:28:24.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.451924 systemd[1]: Stopped dracut-pre-trigger.service. May 13 08:28:24.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.470324 ignition[825]: INFO : umount: umount passed May 13 08:28:24.470324 ignition[825]: INFO : Ignition finished successfully May 13 08:28:24.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.461681 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 08:28:24.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.461799 systemd[1]: Stopped iscsiuio.service. May 13 08:28:24.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.468053 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 08:28:24.468213 systemd[1]: Stopped ignition-mount.service. May 13 08:28:24.469528 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 08:28:24.469680 systemd[1]: Stopped ignition-disks.service. May 13 08:28:24.470843 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 08:28:24.470883 systemd[1]: Stopped ignition-kargs.service. May 13 08:28:24.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.471765 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 08:28:24.471803 systemd[1]: Stopped ignition-fetch.service. May 13 08:28:24.472810 systemd[1]: Stopped target network.target. May 13 08:28:24.473767 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 13 08:28:24.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.473812 systemd[1]: Stopped ignition-fetch-offline.service. May 13 08:28:24.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.475486 systemd[1]: Stopped target paths.target. May 13 08:28:24.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.475982 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 08:28:24.498000 audit: BPF prog-id=6 op=UNLOAD May 13 08:28:24.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.478656 systemd[1]: Stopped systemd-ask-password-console.path. May 13 08:28:24.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.483188 systemd[1]: Stopped target slices.target. May 13 08:28:24.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.483615 systemd[1]: Stopped target sockets.target. May 13 08:28:24.484291 systemd[1]: iscsid.socket: Deactivated successfully. May 13 08:28:24.484322 systemd[1]: Closed iscsid.socket. May 13 08:28:24.484789 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 08:28:24.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.484823 systemd[1]: Closed iscsiuio.socket. May 13 08:28:24.485270 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 08:28:24.485312 systemd[1]: Stopped ignition-setup.service. May 13 08:28:24.486026 systemd[1]: Stopping systemd-networkd.service... May 13 08:28:24.486558 systemd[1]: Stopping systemd-resolved.service... May 13 08:28:24.488092 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 08:28:24.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.488696 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 08:28:24.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.488784 systemd[1]: Finished initrd-cleanup.service. 
May 13 08:28:24.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.492687 systemd-networkd[643]: eth0: DHCPv6 lease lost May 13 08:28:24.518000 audit: BPF prog-id=9 op=UNLOAD May 13 08:28:24.494660 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 08:28:24.494753 systemd[1]: Stopped systemd-networkd.service. May 13 08:28:24.495966 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 08:28:24.496055 systemd[1]: Stopped systemd-resolved.service. May 13 08:28:24.497253 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 08:28:24.497288 systemd[1]: Closed systemd-networkd.socket. May 13 08:28:24.498923 systemd[1]: Stopping network-cleanup.service... May 13 08:28:24.499405 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 08:28:24.499451 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 08:28:24.500009 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 08:28:24.500047 systemd[1]: Stopped systemd-sysctl.service. May 13 08:28:24.500874 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 08:28:24.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.500912 systemd[1]: Stopped systemd-modules-load.service. May 13 08:28:24.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.501751 systemd[1]: Stopping systemd-udevd.service... May 13 08:28:24.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.509462 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 08:28:24.510043 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 08:28:24.510161 systemd[1]: Stopped systemd-udevd.service. May 13 08:28:24.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.512072 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 08:28:24.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.512109 systemd[1]: Closed systemd-udevd-control.socket. May 13 08:28:24.514203 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 08:28:24.514235 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 08:28:24.515115 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 08:28:24.515157 systemd[1]: Stopped dracut-pre-udev.service. 
May 13 08:28:24.516157 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 08:28:24.516193 systemd[1]: Stopped dracut-cmdline.service. May 13 08:28:24.517104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 08:28:24.517140 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 08:28:24.518853 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 08:28:24.525154 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 08:28:24.525216 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 13 08:28:24.526581 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 08:28:24.526749 systemd[1]: Stopped kmod-static-nodes.service. May 13 08:28:24.527398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 08:28:24.527436 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 08:28:24.529198 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 08:28:24.529717 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 08:28:24.529797 systemd[1]: Stopped network-cleanup.service. May 13 08:28:24.530798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 08:28:24.530866 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 08:28:24.736531 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 08:28:24.736791 systemd[1]: Stopped sysroot-boot.service. May 13 08:28:24.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.739794 systemd[1]: Reached target initrd-switch-root.target. May 13 08:28:24.741920 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 08:28:24.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:24.742022 systemd[1]: Stopped initrd-setup-root.service. May 13 08:28:24.745998 systemd[1]: Starting initrd-switch-root.service... May 13 08:28:24.764787 systemd[1]: Switching root. May 13 08:28:24.774000 audit: BPF prog-id=5 op=UNLOAD May 13 08:28:24.774000 audit: BPF prog-id=4 op=UNLOAD May 13 08:28:24.774000 audit: BPF prog-id=3 op=UNLOAD May 13 08:28:24.774000 audit: BPF prog-id=8 op=UNLOAD May 13 08:28:24.775000 audit: BPF prog-id=7 op=UNLOAD May 13 08:28:24.800238 iscsid[648]: iscsid shutting down. May 13 08:28:24.801696 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). May 13 08:28:24.801860 systemd-journald[185]: Journal stopped May 13 08:28:29.562995 kernel: SELinux: Class mctp_socket not defined in policy. May 13 08:28:29.563554 kernel: SELinux: Class anon_inode not defined in policy. 
May 13 08:28:29.563578 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 08:28:29.563609 kernel: SELinux: policy capability network_peer_controls=1 May 13 08:28:29.563624 kernel: SELinux: policy capability open_perms=1 May 13 08:28:29.563637 kernel: SELinux: policy capability extended_socket_class=1 May 13 08:28:29.563654 kernel: SELinux: policy capability always_check_network=0 May 13 08:28:29.563667 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 08:28:29.563680 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 08:28:29.563693 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 08:28:29.563708 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 08:28:29.563722 systemd[1]: Successfully loaded SELinux policy in 96.406ms. May 13 08:28:29.563742 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.206ms. May 13 08:28:29.563758 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 08:28:29.563772 systemd[1]: Detected virtualization kvm. May 13 08:28:29.563787 systemd[1]: Detected architecture x86-64. May 13 08:28:29.563800 systemd[1]: Detected first boot. May 13 08:28:29.563814 systemd[1]: Hostname set to . May 13 08:28:29.563828 systemd[1]: Initializing machine ID from VM UUID. May 13 08:28:29.563842 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 08:28:29.563856 systemd[1]: Populated /etc with preset unit settings. May 13 08:28:29.563870 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 08:28:29.563887 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 08:28:29.563905 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 08:28:29.563921 systemd[1]: Queued start job for default target multi-user.target. May 13 08:28:29.563937 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 08:28:29.563951 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 08:28:29.563966 systemd[1]: Created slice system-addon\x2drun.slice. May 13 08:28:29.563979 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 13 08:28:29.563993 systemd[1]: Created slice system-getty.slice. May 13 08:28:29.564007 systemd[1]: Created slice system-modprobe.slice. May 13 08:28:29.564805 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 08:28:29.564825 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 08:28:29.564840 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 08:28:29.564856 systemd[1]: Created slice user.slice. May 13 08:28:29.564870 systemd[1]: Started systemd-ask-password-console.path. May 13 08:28:29.564885 systemd[1]: Started systemd-ask-password-wall.path. May 13 08:28:29.564899 systemd[1]: Set up automount boot.automount. 
May 13 08:28:29.564914 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 08:28:29.564928 systemd[1]: Reached target integritysetup.target. May 13 08:28:29.564942 systemd[1]: Reached target remote-cryptsetup.target. May 13 08:28:29.564958 systemd[1]: Reached target remote-fs.target. May 13 08:28:29.564972 systemd[1]: Reached target slices.target. May 13 08:28:29.564985 systemd[1]: Reached target swap.target. May 13 08:28:29.564999 systemd[1]: Reached target torcx.target. May 13 08:28:29.565013 systemd[1]: Reached target veritysetup.target. May 13 08:28:29.565027 systemd[1]: Listening on systemd-coredump.socket. May 13 08:28:29.565041 systemd[1]: Listening on systemd-initctl.socket. May 13 08:28:29.565055 systemd[1]: Listening on systemd-journald-audit.socket. May 13 08:28:29.565068 kernel: kauditd_printk_skb: 48 callbacks suppressed May 13 08:28:29.565084 kernel: audit: type=1400 audit(1747124909.301:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 08:28:29.565099 kernel: audit: type=1335 audit(1747124909.301:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 13 08:28:29.565113 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 08:28:29.565127 systemd[1]: Listening on systemd-journald.socket. May 13 08:28:29.565141 systemd[1]: Listening on systemd-networkd.socket. May 13 08:28:29.565155 systemd[1]: Listening on systemd-udevd-control.socket. May 13 08:28:29.565169 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 08:28:29.565182 systemd[1]: Listening on systemd-userdbd.socket. May 13 08:28:29.565196 systemd[1]: Mounting dev-hugepages.mount... May 13 08:28:29.565212 systemd[1]: Mounting dev-mqueue.mount... May 13 08:28:29.565228 systemd[1]: Mounting media.mount... May 13 08:28:29.565243 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:29.565257 systemd[1]: Mounting sys-kernel-debug.mount... May 13 08:28:29.565271 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 08:28:29.565284 systemd[1]: Mounting tmp.mount... May 13 08:28:29.565298 systemd[1]: Starting flatcar-tmpfiles.service... May 13 08:28:29.565313 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 08:28:29.565327 systemd[1]: Starting kmod-static-nodes.service... May 13 08:28:29.565342 systemd[1]: Starting modprobe@configfs.service... May 13 08:28:29.565355 systemd[1]: Starting modprobe@dm_mod.service... May 13 08:28:29.565369 systemd[1]: Starting modprobe@drm.service... May 13 08:28:29.565383 systemd[1]: Starting modprobe@efi_pstore.service... May 13 08:28:29.565398 systemd[1]: Starting modprobe@fuse.service... May 13 08:28:29.565412 systemd[1]: Starting modprobe@loop.service... May 13 08:28:29.565426 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 08:28:29.565441 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 13 08:28:29.565457 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 13 08:28:29.565470 systemd[1]: Starting systemd-journald.service... 
May 13 08:28:29.565484 systemd[1]: Starting systemd-modules-load.service... May 13 08:28:29.565498 systemd[1]: Starting systemd-network-generator.service... May 13 08:28:29.565512 systemd[1]: Starting systemd-remount-fs.service... May 13 08:28:29.565526 systemd[1]: Starting systemd-udev-trigger.service... May 13 08:28:29.565540 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:29.565554 systemd[1]: Mounted dev-hugepages.mount. May 13 08:28:29.565567 systemd[1]: Mounted dev-mqueue.mount. May 13 08:28:29.565581 systemd[1]: Mounted media.mount. May 13 08:28:29.565621 systemd[1]: Mounted sys-kernel-debug.mount. May 13 08:28:29.565637 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 08:28:29.565650 systemd[1]: Mounted tmp.mount. May 13 08:28:29.565664 systemd[1]: Finished kmod-static-nodes.service. May 13 08:28:29.565678 kernel: audit: type=1130 audit(1747124909.512:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.565692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 08:28:29.565705 systemd[1]: Finished modprobe@dm_mod.service. May 13 08:28:29.565720 kernel: audit: type=1130 audit(1747124909.523:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.565736 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 08:28:29.565754 kernel: audit: type=1131 audit(1747124909.528:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.565767 systemd[1]: Finished modprobe@drm.service. May 13 08:28:29.565781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 08:28:29.565796 kernel: audit: type=1130 audit(1747124909.539:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.565809 systemd[1]: Finished modprobe@efi_pstore.service. May 13 08:28:29.565824 systemd[1]: Finished systemd-network-generator.service. May 13 08:28:29.565837 kernel: audit: type=1131 audit(1747124909.539:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.565853 systemd[1]: Finished systemd-remount-fs.service. May 13 08:28:29.565867 systemd[1]: Reached target network-pre.target. May 13 08:28:29.565881 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 08:28:29.565896 kernel: audit: type=1130 audit(1747124909.548:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.565915 systemd-journald[955]: Journal started May 13 08:28:29.565968 systemd-journald[955]: Runtime Journal (/run/log/journal/36b3b01e868a4aee9addd7ce382efaef) is 8.0M, max 78.4M, 70.4M free. 
May 13 08:28:29.301000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 08:28:29.301000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 13 08:28:29.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.575827 kernel: loop: module loaded May 13 08:28:29.575867 kernel: audit: type=1131 audit(1747124909.548:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.575887 kernel: audit: type=1130 audit(1747124909.556:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:28:29.560000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 08:28:29.560000 audit[955]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff130f8fa0 a2=4000 a3=7fff130f903c items=0 ppid=1 pid=955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:28:29.560000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 08:28:29.598618 systemd[1]: Starting systemd-hwdb-update.service... May 13 08:28:29.602615 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 08:28:29.610617 systemd[1]: Starting systemd-random-seed.service... May 13 08:28:29.613169 systemd[1]: Started systemd-journald.service. May 13 08:28:29.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.614461 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 08:28:29.614668 systemd[1]: Finished modprobe@configfs.service. May 13 08:28:29.615869 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 08:28:29.616015 systemd[1]: Finished modprobe@loop.service. May 13 08:28:29.619061 systemd[1]: Mounting sys-kernel-config.mount... May 13 08:28:29.622644 systemd[1]: Starting systemd-journal-flush.service... May 13 08:28:29.623170 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 08:28:29.624382 systemd[1]: Mounted sys-kernel-config.mount. May 13 08:28:29.635624 kernel: fuse: init (API version 7.34) May 13 08:28:29.636430 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 08:28:29.637727 systemd[1]: Finished modprobe@fuse.service. May 13 08:28:29.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:28:29.639540 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 08:28:29.644051 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 08:28:29.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.649706 systemd[1]: Finished flatcar-tmpfiles.service. May 13 08:28:29.651783 systemd[1]: Starting systemd-sysusers.service... May 13 08:28:29.664667 systemd-journald[955]: Time spent on flushing to /var/log/journal/36b3b01e868a4aee9addd7ce382efaef is 28.578ms for 1033 entries. May 13 08:28:29.664667 systemd-journald[955]: System Journal (/var/log/journal/36b3b01e868a4aee9addd7ce382efaef) is 8.0M, max 584.8M, 576.8M free. May 13 08:28:29.807896 systemd-journald[955]: Received client request to flush runtime journal. May 13 08:28:29.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.675536 systemd[1]: Finished systemd-modules-load.service. May 13 08:28:29.677539 systemd[1]: Starting systemd-sysctl.service... May 13 08:28:29.808703 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 08:28:29.695188 systemd[1]: Finished systemd-udev-trigger.service. May 13 08:28:29.697100 systemd[1]: Starting systemd-udev-settle.service... May 13 08:28:29.729433 systemd[1]: Finished systemd-random-seed.service. May 13 08:28:29.730244 systemd[1]: Reached target first-boot-complete.target. May 13 08:28:29.797955 systemd[1]: Finished systemd-sysctl.service. May 13 08:28:29.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.810203 systemd[1]: Finished systemd-journal-flush.service. May 13 08:28:29.813757 systemd[1]: Finished systemd-sysusers.service. May 13 08:28:29.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:29.817174 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 08:28:29.877054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 13 08:28:29.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.431360 systemd[1]: Finished systemd-hwdb-update.service. May 13 08:28:30.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.435318 systemd[1]: Starting systemd-udevd.service... May 13 08:28:30.476649 systemd-udevd[1024]: Using default interface naming scheme 'v252'. May 13 08:28:30.521718 systemd[1]: Started systemd-udevd.service. May 13 08:28:30.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.533264 systemd[1]: Starting systemd-networkd.service... May 13 08:28:30.548025 systemd[1]: Starting systemd-userdbd.service... May 13 08:28:30.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.587341 systemd[1]: Started systemd-userdbd.service. May 13 08:28:30.600520 systemd[1]: Found device dev-ttyS0.device. May 13 08:28:30.688082 systemd-networkd[1044]: lo: Link UP May 13 08:28:30.688092 systemd-networkd[1044]: lo: Gained carrier May 13 08:28:30.689154 systemd-networkd[1044]: Enumeration completed May 13 08:28:30.689281 systemd-networkd[1044]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 08:28:30.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.689761 systemd[1]: Started systemd-networkd.service. May 13 08:28:30.693492 systemd-networkd[1044]: eth0: Link UP May 13 08:28:30.693502 systemd-networkd[1044]: eth0: Gained carrier May 13 08:28:30.701761 systemd-networkd[1044]: eth0: DHCPv4 address 172.24.4.231/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 08:28:30.714645 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 08:28:30.718578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 13 08:28:30.728856 kernel: ACPI: button: Power Button [PWRF] May 13 08:28:30.740000 audit[1027]: AVC avc: denied { confidentiality } for pid=1027 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 08:28:30.740000 audit[1027]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dafff3c820 a1=338ac a2=7f6878743bc5 a3=5 items=110 ppid=1024 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:28:30.740000 audit: CWD cwd="/" May 13 08:28:30.740000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=1 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=2 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=3 name=(null) inode=14084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=4 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=5 name=(null) inode=14085 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=6 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=7 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=8 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=9 name=(null) inode=14087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=10 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=11 name=(null) inode=14088 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=12 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH 
item=13 name=(null) inode=14089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=14 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=15 name=(null) inode=14090 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=16 name=(null) inode=14086 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=17 name=(null) inode=14091 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=18 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=19 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=20 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=21 name=(null) inode=14093 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=22 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=23 name=(null) inode=14094 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=24 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=25 name=(null) inode=14095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=26 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=27 name=(null) inode=14096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=28 name=(null) inode=14092 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=29 name=(null) inode=14097 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=30 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=31 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=32 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=33 name=(null) inode=14099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=34 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=35 name=(null) inode=14100 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=36 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=37 name=(null) inode=14101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=38 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=39 name=(null) inode=14102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=40 name=(null) inode=14098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=41 name=(null) inode=14103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=42 name=(null) inode=14083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=43 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=44 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=45 name=(null) inode=14105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=46 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=47 name=(null) inode=14106 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=48 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=49 name=(null) inode=14107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=50 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=51 name=(null) inode=14108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=52 name=(null) inode=14104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=53 name=(null) inode=14109 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=55 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=56 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=57 name=(null) inode=14111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=58 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=59 name=(null) inode=14112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=60 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=61 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH 
item=62 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=63 name=(null) inode=14114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=64 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=65 name=(null) inode=14115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=66 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=67 name=(null) inode=14116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=68 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=69 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=70 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=71 name=(null) inode=14118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=72 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=73 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=74 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=75 name=(null) inode=14120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=76 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=77 name=(null) inode=14121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=78 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=79 name=(null) inode=14122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=80 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=81 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=82 name=(null) inode=14119 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=83 name=(null) inode=14124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=84 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=85 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=86 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=87 name=(null) inode=14126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=88 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=89 name=(null) inode=14127 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=90 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=91 name=(null) inode=14128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=92 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=93 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=94 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=95 name=(null) inode=14130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=96 name=(null) inode=14110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=97 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=98 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=99 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=100 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=101 name=(null) inode=14133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=102 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=103 name=(null) inode=14134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=104 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=105 name=(null) inode=14135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=106 name=(null) inode=14131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=107 name=(null) inode=14136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PATH item=109 name=(null) inode=14137 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 08:28:30.740000 audit: PROCTITLE proctitle="(udev-worker)" May 13 08:28:30.758617 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 08:28:30.781625 kernel: input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input3 May 13 08:28:30.788622 kernel: mousedev: PS/2 mouse device common for all mice May 13 08:28:30.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.841689 systemd[1]: Finished systemd-udev-settle.service. May 13 08:28:30.845672 systemd[1]: Starting lvm2-activation-early.service... May 13 08:28:30.883557 lvm[1059]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 08:28:30.915380 systemd[1]: Finished lvm2-activation-early.service. May 13 08:28:30.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.916982 systemd[1]: Reached target cryptsetup.target. May 13 08:28:30.921515 systemd[1]: Starting lvm2-activation.service... May 13 08:28:30.925215 lvm[1061]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 08:28:30.950271 systemd[1]: Finished lvm2-activation.service. May 13 08:28:30.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.952153 systemd[1]: Reached target local-fs-pre.target. May 13 08:28:30.953494 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 08:28:30.953877 systemd[1]: Reached target local-fs.target. May 13 08:28:30.955182 systemd[1]: Reached target machines.target. May 13 08:28:30.959276 systemd[1]: Starting ldconfig.service... May 13 08:28:30.963062 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 08:28:30.963348 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 08:28:30.965986 systemd[1]: Starting systemd-boot-update.service... May 13 08:28:30.969730 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 08:28:30.973918 systemd[1]: Starting systemd-machine-id-commit.service... May 13 08:28:30.977990 systemd[1]: Starting systemd-sysext.service... May 13 08:28:30.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:30.981997 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1064 (bootctl) May 13 08:28:30.984476 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 08:28:30.994521 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 08:28:31.017580 systemd[1]: Unmounting usr-share-oem.mount... May 13 08:28:31.022192 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 08:28:31.022457 systemd[1]: Unmounted usr-share-oem.mount. May 13 08:28:31.102673 kernel: loop0: detected capacity change from 0 to 210664 May 13 08:28:31.298841 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
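systemd-sysext.service, started above, merges system extension images into /usr at runtime, and the records that follow show it finding and merging a 'kubernetes' extension. As a rough sketch only (the paths, names and kubelet binary here are illustrative, not read from this host), a directory-based extension looks like:

    /var/lib/extensions/kubernetes/
        usr/bin/kubelet
        usr/lib/extension-release.d/extension-release.kubernetes
            ID=flatcar
            SYSEXT_LEVEL=1.0

The extension-release file must declare an ID (and VERSION_ID or SYSEXT_LEVEL) compatible with the host's /etc/os-release, otherwise systemd-sysext refuses to merge it; systemd-sysext status shows what is currently merged into each hierarchy.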
May 13 08:28:31.300698 systemd[1]: Finished systemd-machine-id-commit.service. May 13 08:28:31.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.359642 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 08:28:31.394214 kernel: loop1: detected capacity change from 0 to 210664 May 13 08:28:31.440796 (sd-sysext)[1080]: Using extensions 'kubernetes'. May 13 08:28:31.443114 (sd-sysext)[1080]: Merged extensions into '/usr'. May 13 08:28:31.468916 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.471387 systemd[1]: Mounting usr-share-oem.mount... May 13 08:28:31.472965 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 08:28:31.474969 systemd[1]: Starting modprobe@dm_mod.service... May 13 08:28:31.477550 systemd[1]: Starting modprobe@efi_pstore.service... May 13 08:28:31.485375 systemd[1]: Starting modprobe@loop.service... May 13 08:28:31.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.488249 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 08:28:31.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.488431 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 08:28:31.488647 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.492875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 08:28:31.493203 systemd[1]: Finished modprobe@dm_mod.service. May 13 08:28:31.499154 systemd-fsck[1076]: fsck.fat 4.2 (2021-01-31) May 13 08:28:31.499154 systemd-fsck[1076]: /dev/vda1: 790 files, 120692/258078 clusters May 13 08:28:31.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.501278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 08:28:31.501669 systemd[1]: Finished modprobe@efi_pstore.service. May 13 08:28:31.503953 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 08:28:31.504361 systemd[1]: Finished modprobe@loop.service. May 13 08:28:31.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 08:28:31.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.505899 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 08:28:31.506099 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 08:28:31.515742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 08:28:31.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.517845 systemd[1]: Mounted usr-share-oem.mount. May 13 08:28:31.522251 systemd[1]: Finished systemd-sysext.service. May 13 08:28:31.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.531008 systemd[1]: Mounting boot.mount... May 13 08:28:31.533434 systemd[1]: Starting ensure-sysext.service... May 13 08:28:31.535394 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 08:28:31.546676 systemd[1]: Reloading. May 13 08:28:31.551318 systemd-tmpfiles[1098]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 08:28:31.556935 systemd-tmpfiles[1098]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 08:28:31.560684 systemd-tmpfiles[1098]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 08:28:31.638649 /usr/lib/systemd/system-generators/torcx-generator[1121]: time="2025-05-13T08:28:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 08:28:31.638681 /usr/lib/systemd/system-generators/torcx-generator[1121]: time="2025-05-13T08:28:31Z" level=info msg="torcx already run" May 13 08:28:31.774493 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 08:28:31.774515 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 08:28:31.802832 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 08:28:31.881829 systemd[1]: Mounted boot.mount. May 13 08:28:31.897225 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.897473 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 08:28:31.898856 systemd[1]: Starting modprobe@dm_mod.service... May 13 08:28:31.900792 systemd[1]: Starting modprobe@efi_pstore.service... May 13 08:28:31.903673 systemd[1]: Starting modprobe@loop.service... 
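The daemon reload above also surfaces two compatibility warnings: locksmithd.service still uses the legacy cgroup directives CPUShares= and MemoryLimit=, and docker.socket listens on a path under /var/run/. The replacements the messages ask for look like this (the numeric values are placeholders, not the unit's real settings):

    # locksmithd.service: CPUWeight= replaces CPUShares= (default weight 100,
    # default shares 1024), MemoryMax= replaces MemoryLimit=
    [Service]
    CPUWeight=100
    MemoryMax=128M

    # docker.socket: /var/run is a symlink to /run, so point the listener there
    [Socket]
    ListenStream=/run/docker.sock

systemd already rewrites the socket path on the fly, as the log notes, so only the unit files themselves need updating to quiet the warnings.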
May 13 08:28:31.906350 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 08:28:31.906513 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 08:28:31.906743 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.907866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 08:28:31.908053 systemd[1]: Finished modprobe@dm_mod.service. May 13 08:28:31.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.909411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 08:28:31.909627 systemd[1]: Finished modprobe@efi_pstore.service. May 13 08:28:31.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.915617 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 08:28:31.915836 systemd[1]: Finished modprobe@loop.service. May 13 08:28:31.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.916741 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 08:28:31.916861 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 08:28:31.930435 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.930758 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 08:28:31.936011 systemd[1]: Starting modprobe@dm_mod.service... May 13 08:28:31.939813 systemd[1]: Starting modprobe@efi_pstore.service... May 13 08:28:31.941731 systemd[1]: Starting modprobe@loop.service... May 13 08:28:31.946940 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 08:28:31.947128 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 13 08:28:31.947307 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.948564 systemd[1]: Finished systemd-boot-update.service. May 13 08:28:31.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.952364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 08:28:31.952564 systemd[1]: Finished modprobe@dm_mod.service. May 13 08:28:31.953713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 08:28:31.953891 systemd[1]: Finished modprobe@efi_pstore.service. May 13 08:28:31.954889 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 08:28:31.955049 systemd[1]: Finished modprobe@loop.service. May 13 08:28:31.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.957233 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 08:28:31.957342 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 08:28:31.964771 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.965084 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 08:28:31.966499 systemd[1]: Starting modprobe@dm_mod.service... May 13 08:28:31.968726 systemd[1]: Starting modprobe@drm.service... May 13 08:28:31.970278 systemd[1]: Starting modprobe@efi_pstore.service... May 13 08:28:31.972281 systemd[1]: Starting modprobe@loop.service... May 13 08:28:31.974985 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 08:28:31.975147 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 13 08:28:31.976793 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 08:28:31.979346 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 08:28:31.980762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 08:28:31.980927 systemd[1]: Finished modprobe@dm_mod.service. May 13 08:28:31.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.983037 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 08:28:31.983183 systemd[1]: Finished modprobe@drm.service. May 13 08:28:31.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.987211 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 08:28:31.987364 systemd[1]: Finished modprobe@efi_pstore.service. May 13 08:28:31.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.992243 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 08:28:31.992414 systemd[1]: Finished modprobe@loop.service. May 13 08:28:31.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.993628 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 08:28:31.993786 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 08:28:31.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:31.996218 systemd[1]: Finished ensure-sysext.service. May 13 08:28:32.013311 ldconfig[1063]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 08:28:32.028221 systemd[1]: Finished ldconfig.service. 
May 13 08:28:32.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:32.057136 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 08:28:32.059055 systemd[1]: Starting audit-rules.service... May 13 08:28:32.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:32.060633 systemd[1]: Starting clean-ca-certificates.service... May 13 08:28:32.063144 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 08:28:32.068307 systemd[1]: Starting systemd-resolved.service... May 13 08:28:32.071496 systemd[1]: Starting systemd-timesyncd.service... May 13 08:28:32.073240 systemd[1]: Starting systemd-update-utmp.service... May 13 08:28:32.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:32.078287 systemd[1]: Finished clean-ca-certificates.service. May 13 08:28:32.079120 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 08:28:32.090000 audit[1208]: SYSTEM_BOOT pid=1208 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 08:28:32.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:32.093127 systemd[1]: Finished systemd-update-utmp.service. May 13 08:28:32.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:32.124403 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 08:28:32.126517 systemd[1]: Starting systemd-update-done.service... May 13 08:28:32.137089 systemd[1]: Finished systemd-update-done.service. May 13 08:28:32.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 08:28:32.147000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 08:28:32.147000 audit[1225]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd10703d60 a2=420 a3=0 items=0 ppid=1201 pid=1225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 08:28:32.147000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 08:28:32.148644 systemd[1]: Finished audit-rules.service. 
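The auditctl record above carries its command line in the hex-encoded PROCTITLE field, with NUL bytes separating the arguments. A few lines of Python 3 decode it:

    # Decode the PROCTITLE value from the audit record above.
    proctitle_hex = ("2F7362696E2F617564697463746C002D52002F657463"
                     "2F61756469742F61756469742E72756C6573")
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /sbin/auditctl -R /etc/audit/audit.rules

which matches the /etc/audit/audit.rules load performed by audit-rules.service above.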
May 13 08:28:32.149387 augenrules[1225]: No rules May 13 08:28:32.182649 systemd[1]: Started systemd-timesyncd.service. May 13 08:28:32.183453 systemd[1]: Reached target time-set.target. May 13 08:28:32.192715 systemd-resolved[1205]: Positive Trust Anchors: May 13 08:28:32.192731 systemd-resolved[1205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 08:28:32.192770 systemd-resolved[1205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 08:28:32.200513 systemd-resolved[1205]: Using system hostname 'ci-3510-3-7-n-dd395c61fd.novalocal'. May 13 08:28:32.202058 systemd[1]: Started systemd-resolved.service. May 13 08:28:32.202694 systemd[1]: Reached target network.target. May 13 08:28:32.203162 systemd[1]: Reached target nss-lookup.target. May 13 08:28:32.203652 systemd[1]: Reached target sysinit.target. May 13 08:28:32.204243 systemd[1]: Started motdgen.path. May 13 08:28:32.204934 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 08:28:32.205744 systemd[1]: Started logrotate.timer. May 13 08:28:32.205947 systemd-timesyncd[1207]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). May 13 08:28:32.205994 systemd-timesyncd[1207]: Initial clock synchronization to Tue 2025-05-13 08:28:32.169705 UTC. May 13 08:28:32.206260 systemd[1]: Started mdadm.timer. May 13 08:28:32.206687 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 08:28:32.207143 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 08:28:32.207175 systemd[1]: Reached target paths.target. May 13 08:28:32.207613 systemd[1]: Reached target timers.target. May 13 08:28:32.208422 systemd[1]: Listening on dbus.socket. May 13 08:28:32.210451 systemd[1]: Starting docker.socket... May 13 08:28:32.213471 systemd[1]: Listening on sshd.socket. May 13 08:28:32.214069 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 08:28:32.214572 systemd[1]: Listening on docker.socket. May 13 08:28:32.215080 systemd[1]: Reached target sockets.target. May 13 08:28:32.215503 systemd[1]: Reached target basic.target. May 13 08:28:32.216115 systemd[1]: System is tainted: cgroupsv1 May 13 08:28:32.216161 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 08:28:32.216186 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 08:28:32.217393 systemd[1]: Starting containerd.service... May 13 08:28:32.218880 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 13 08:28:32.220608 systemd[1]: Starting dbus.service... May 13 08:28:32.222177 systemd[1]: Starting enable-oem-cloudinit.service... May 13 08:28:32.223857 systemd[1]: Starting extend-filesystems.service... 
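systemd-timesyncd, started above, synchronized against 0.flatcar.pool.ntp.org from the image's default server list. Overriding that list takes a small drop-in; a minimal sketch, using the pool name contacted above and the standard drop-in path:

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf
    [Time]
    NTP=0.flatcar.pool.ntp.org

systemd-resolved, started in the same batch, logs its DNSSEC trust anchors and the derived hostname ci-3510-3-7-n-dd395c61fd.novalocal, so name resolution and time sync are both in place before the container runtime comes up.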
May 13 08:28:32.224422 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 08:28:32.225723 systemd[1]: Starting motdgen.service... May 13 08:28:32.227474 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 08:28:32.233032 systemd[1]: Starting sshd-keygen.service... May 13 08:28:32.236666 systemd[1]: Starting systemd-logind.service... May 13 08:28:32.241777 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 08:28:32.241850 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 08:28:32.243355 systemd[1]: Starting update-engine.service... May 13 08:28:32.246292 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 08:28:32.255253 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 08:28:32.256830 jq[1241]: false May 13 08:28:32.255515 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 08:28:32.270169 systemd[1]: Created slice system-sshd.slice. May 13 08:28:32.302616 jq[1248]: true May 13 08:28:32.277502 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 08:28:32.277792 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 08:28:32.278791 systemd-networkd[1044]: eth0: Gained IPv6LL May 13 08:28:32.280695 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 08:28:32.281350 systemd[1]: Reached target network-online.target. May 13 08:28:32.284016 systemd[1]: Starting kubelet.service... May 13 08:28:32.320936 jq[1267]: true May 13 08:28:32.322241 systemd[1]: motdgen.service: Deactivated successfully. May 13 08:28:32.322538 systemd[1]: Finished motdgen.service. May 13 08:28:32.338631 extend-filesystems[1242]: Found loop1 May 13 08:28:32.338631 extend-filesystems[1242]: Found vda May 13 08:28:32.338631 extend-filesystems[1242]: Found vda1 May 13 08:28:32.338631 extend-filesystems[1242]: Found vda2 May 13 08:28:32.338631 extend-filesystems[1242]: Found vda3 May 13 08:28:32.338631 extend-filesystems[1242]: Found usr May 13 08:28:32.338631 extend-filesystems[1242]: Found vda4 May 13 08:28:32.338631 extend-filesystems[1242]: Found vda6 May 13 08:28:32.338631 extend-filesystems[1242]: Found vda7 May 13 08:28:32.338631 extend-filesystems[1242]: Found vda9 May 13 08:28:32.338631 extend-filesystems[1242]: Checking size of /dev/vda9 May 13 08:28:32.356045 systemd[1]: Started dbus.service. May 13 08:28:32.355837 dbus-daemon[1237]: [system] SELinux support is enabled May 13 08:28:32.358768 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 08:28:32.358791 systemd[1]: Reached target system-config.target. May 13 08:28:32.359326 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 08:28:32.359342 systemd[1]: Reached target user-config.target. 
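extend-filesystems.service, started above, enumerates the virtio disk's partitions (the Found vda... records) and, as the next lines show, grows the root filesystem on /dev/vda9 in place. Done by hand, the equivalent is a two-step online grow; a sketch only, since growpart comes from cloud-utils and is not necessarily present on this image:

    growpart /dev/vda 9    # extend partition 9 to the end of the disk
    resize2fs /dev/vda9    # grow the mounted ext4 filesystem online

resize2fs can enlarge a mounted ext4 filesystem, which is why the log shows the resize happening on the live root without a reboot.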
May 13 08:28:32.384808 extend-filesystems[1242]: Resized partition /dev/vda9 May 13 08:28:32.399653 extend-filesystems[1299]: resize2fs 1.46.5 (30-Dec-2021) May 13 08:28:32.416612 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 13 08:28:32.426466 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 13 08:28:32.481694 update_engine[1247]: I0513 08:28:32.445784 1247 main.cc:92] Flatcar Update Engine starting May 13 08:28:32.483887 extend-filesystems[1299]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 08:28:32.483887 extend-filesystems[1299]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 08:28:32.483887 extend-filesystems[1299]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 13 08:28:32.492979 env[1258]: time="2025-05-13T08:28:32.483160914Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 08:28:32.493237 update_engine[1247]: I0513 08:28:32.491111 1247 update_check_scheduler.cc:74] Next update check in 3m40s May 13 08:28:32.484128 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 08:28:32.493333 extend-filesystems[1242]: Resized filesystem in /dev/vda9 May 13 08:28:32.484406 systemd[1]: Finished extend-filesystems.service. May 13 08:28:32.502538 bash[1300]: Updated "/home/core/.ssh/authorized_keys" May 13 08:28:32.490013 systemd[1]: Started update-engine.service. May 13 08:28:32.491296 systemd-logind[1246]: Watching system buttons on /dev/input/event1 (Power Button) May 13 08:28:32.491314 systemd-logind[1246]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 08:28:32.492774 systemd-logind[1246]: New seat seat0. May 13 08:28:32.501379 systemd[1]: Started locksmithd.service. May 13 08:28:32.503361 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 08:28:32.504409 systemd[1]: Started systemd-logind.service. May 13 08:28:32.550043 env[1258]: time="2025-05-13T08:28:32.549997356Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 08:28:32.556776 env[1258]: time="2025-05-13T08:28:32.556748047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560289208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560325566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560583170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560627673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560644915Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560657108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560743440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.560988179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.561142018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 08:28:32.561405 env[1258]: time="2025-05-13T08:28:32.561163207Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 08:28:32.561692 env[1258]: time="2025-05-13T08:28:32.561217249Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 08:28:32.561692 env[1258]: time="2025-05-13T08:28:32.561233048Z" level=info msg="metadata content store policy set" policy=shared May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580631991Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580663370Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580681284Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580714836Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580731387Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580748129Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580763868Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580780049Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580795919Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580813962Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580829772Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580845061Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.580939818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 08:28:32.581547 env[1258]: time="2025-05-13T08:28:32.581028014Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 08:28:32.581905 env[1258]: time="2025-05-13T08:28:32.581376117Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 08:28:32.581905 env[1258]: time="2025-05-13T08:28:32.581407576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 08:28:32.581905 env[1258]: time="2025-05-13T08:28:32.581423826Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 08:28:32.581905 env[1258]: time="2025-05-13T08:28:32.581472517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 08:28:32.581905 env[1258]: time="2025-05-13T08:28:32.581487756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582043689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582068115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582083944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582103351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582117497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582130962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582147734Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582288919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582307063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582320548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582333282Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582350755Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582365452Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 08:28:32.584746 env[1258]: time="2025-05-13T08:28:32.582386973Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 08:28:32.583692 systemd[1]: Started containerd.service. May 13 08:28:32.585145 env[1258]: time="2025-05-13T08:28:32.582424543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 08:28:32.585171 env[1258]: time="2025-05-13T08:28:32.582664593Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 08:28:32.585171 env[1258]: time="2025-05-13T08:28:32.582732801Z" level=info msg="Connect containerd service" May 13 08:28:32.585171 env[1258]: time="2025-05-13T08:28:32.582773818Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 08:28:32.585171 env[1258]: time="2025-05-13T08:28:32.583239441Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 08:28:32.585171 env[1258]: time="2025-05-13T08:28:32.583455947Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 13 08:28:32.585171 env[1258]: time="2025-05-13T08:28:32.583496383Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 08:28:32.585171 env[1258]: time="2025-05-13T08:28:32.583541227Z" level=info msg="containerd successfully booted in 0.152052s" May 13 08:28:32.602353 env[1258]: time="2025-05-13T08:28:32.602282998Z" level=info msg="Start subscribing containerd event" May 13 08:28:32.602437 env[1258]: time="2025-05-13T08:28:32.602385430Z" level=info msg="Start recovering state" May 13 08:28:32.602515 env[1258]: time="2025-05-13T08:28:32.602493142Z" level=info msg="Start event monitor" May 13 08:28:32.602549 env[1258]: time="2025-05-13T08:28:32.602514412Z" level=info msg="Start snapshots syncer" May 13 08:28:32.602549 env[1258]: time="2025-05-13T08:28:32.602533397Z" level=info msg="Start cni network conf syncer for default" May 13 08:28:32.602549 env[1258]: time="2025-05-13T08:28:32.602542925Z" level=info msg="Start streaming server" May 13 08:28:32.767007 locksmithd[1305]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 08:28:33.337622 sshd_keygen[1273]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 08:28:33.366986 systemd[1]: Finished sshd-keygen.service. May 13 08:28:33.369706 systemd[1]: Starting issuegen.service... May 13 08:28:33.372505 systemd[1]: Started sshd@0-172.24.4.231:22-172.24.4.1:32786.service. May 13 08:28:33.382343 systemd[1]: issuegen.service: Deactivated successfully. May 13 08:28:33.382624 systemd[1]: Finished issuegen.service. May 13 08:28:33.384946 systemd[1]: Starting systemd-user-sessions.service... May 13 08:28:33.394329 systemd[1]: Finished systemd-user-sessions.service. May 13 08:28:33.396656 systemd[1]: Started getty@tty1.service. May 13 08:28:33.398316 systemd[1]: Started serial-getty@ttyS0.service. May 13 08:28:33.399263 systemd[1]: Reached target getty.target. May 13 08:28:34.172729 systemd[1]: Started kubelet.service. May 13 08:28:34.613375 sshd[1324]: Accepted publickey for core from 172.24.4.1 port 32786 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:28:34.619803 sshd[1324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:34.652985 systemd[1]: Created slice user-500.slice. May 13 08:28:34.657480 systemd[1]: Starting user-runtime-dir@500.service... May 13 08:28:34.666011 systemd-logind[1246]: New session 1 of user core. May 13 08:28:34.686181 systemd[1]: Finished user-runtime-dir@500.service. May 13 08:28:34.688196 systemd[1]: Starting user@500.service... May 13 08:28:34.696164 (systemd)[1346]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:34.790354 systemd[1346]: Queued start job for default target default.target. May 13 08:28:34.790630 systemd[1346]: Reached target paths.target. May 13 08:28:34.790651 systemd[1346]: Reached target sockets.target. May 13 08:28:34.790681 systemd[1346]: Reached target timers.target. May 13 08:28:34.790695 systemd[1346]: Reached target basic.target. May 13 08:28:34.790829 systemd[1]: Started user@500.service. May 13 08:28:34.792296 systemd[1]: Started session-1.scope. May 13 08:28:34.793329 systemd[1346]: Reached target default.target. May 13 08:28:34.793630 systemd[1346]: Startup finished in 89ms. May 13 08:28:35.323873 systemd[1]: Started sshd@1-172.24.4.231:22-172.24.4.1:54498.service. 
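The entries above show containerd reporting that it serves on both /run/containerd/containerd.sock.ttrpc and /run/containerd/containerd.sock and that it "successfully booted". The sketch below is not part of the log: it is a minimal check that those two socket paths (taken verbatim from the messages above) exist and accept a UNIX-domain connection; everything else (timeout, error handling) is an assumption, and connecting typically requires root because of the socket permissions.

```python
# Minimal sketch: verify the containerd sockets reported in the log above
# exist and accept a UNIX-domain connection. Paths come from the log;
# the timeout and the simple pass/fail reporting are assumptions.
import os
import socket

SOCKETS = [
    "/run/containerd/containerd.sock",        # gRPC endpoint from the log
    "/run/containerd/containerd.sock.ttrpc",  # ttrpc endpoint from the log
]

def socket_accepts_connection(path: str, timeout: float = 2.0) -> bool:
    """Return True if `path` is a UNIX socket that accepts a connection."""
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:
        return False
    finally:
        s.close()

if __name__ == "__main__":
    for p in SOCKETS:
        state = "reachable" if socket_accepts_connection(p) else "unreachable"
        print(f"{p}: {state}")
```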
May 13 08:28:35.534165 kubelet[1337]: E0513 08:28:35.534018 1337 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:28:35.537460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:28:35.537846 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 08:28:36.550407 sshd[1356]: Accepted publickey for core from 172.24.4.1 port 54498 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:28:36.553196 sshd[1356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:36.563971 systemd-logind[1246]: New session 2 of user core. May 13 08:28:36.564899 systemd[1]: Started session-2.scope. May 13 08:28:37.462184 sshd[1356]: pam_unix(sshd:session): session closed for user core May 13 08:28:37.467698 systemd[1]: Started sshd@2-172.24.4.231:22-172.24.4.1:54500.service. May 13 08:28:37.474038 systemd[1]: sshd@1-172.24.4.231:22-172.24.4.1:54498.service: Deactivated successfully. May 13 08:28:37.477227 systemd[1]: session-2.scope: Deactivated successfully. May 13 08:28:37.478346 systemd-logind[1246]: Session 2 logged out. Waiting for processes to exit. May 13 08:28:37.481504 systemd-logind[1246]: Removed session 2. May 13 08:28:38.904940 sshd[1363]: Accepted publickey for core from 172.24.4.1 port 54500 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:28:38.907701 sshd[1363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:38.918960 systemd-logind[1246]: New session 3 of user core. May 13 08:28:38.919474 systemd[1]: Started session-3.scope. May 13 08:28:39.362145 coreos-metadata[1236]: May 13 08:28:39.362 WARN failed to locate config-drive, using the metadata service API instead May 13 08:28:39.446890 coreos-metadata[1236]: May 13 08:28:39.446 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 08:28:39.642710 sshd[1363]: pam_unix(sshd:session): session closed for user core May 13 08:28:39.648656 systemd[1]: sshd@2-172.24.4.231:22-172.24.4.1:54500.service: Deactivated successfully. May 13 08:28:39.650234 systemd[1]: session-3.scope: Deactivated successfully. May 13 08:28:39.652715 systemd-logind[1246]: Session 3 logged out. Waiting for processes to exit. May 13 08:28:39.654939 systemd-logind[1246]: Removed session 3. May 13 08:28:39.711066 coreos-metadata[1236]: May 13 08:28:39.710 INFO Fetch successful May 13 08:28:39.711066 coreos-metadata[1236]: May 13 08:28:39.711 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 08:28:39.726014 coreos-metadata[1236]: May 13 08:28:39.725 INFO Fetch successful May 13 08:28:39.728650 unknown[1236]: wrote ssh authorized keys file for user: core May 13 08:28:39.766652 update-ssh-keys[1375]: Updated "/home/core/.ssh/authorized_keys" May 13 08:28:39.768344 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 13 08:28:39.769125 systemd[1]: Reached target multi-user.target. May 13 08:28:39.772314 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 08:28:39.794450 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 08:28:39.795377 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
May 13 08:28:39.796480 systemd[1]: Startup finished in 8.445s (kernel) + 14.688s (userspace) = 23.133s. May 13 08:28:45.571392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 08:28:45.572118 systemd[1]: Stopped kubelet.service. May 13 08:28:45.576254 systemd[1]: Starting kubelet.service... May 13 08:28:45.855566 systemd[1]: Started kubelet.service. May 13 08:28:45.992582 kubelet[1388]: E0513 08:28:45.992509 1388 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:28:46.000471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:28:46.000870 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 08:28:49.638469 systemd[1]: Started sshd@3-172.24.4.231:22-172.24.4.1:55384.service. May 13 08:28:51.101016 sshd[1395]: Accepted publickey for core from 172.24.4.1 port 55384 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:28:51.104548 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:51.116740 systemd-logind[1246]: New session 4 of user core. May 13 08:28:51.117698 systemd[1]: Started session-4.scope. May 13 08:28:51.972395 sshd[1395]: pam_unix(sshd:session): session closed for user core May 13 08:28:51.979343 systemd[1]: Started sshd@4-172.24.4.231:22-172.24.4.1:55388.service. May 13 08:28:51.983657 systemd[1]: sshd@3-172.24.4.231:22-172.24.4.1:55384.service: Deactivated successfully. May 13 08:28:51.987414 systemd[1]: session-4.scope: Deactivated successfully. May 13 08:28:51.988973 systemd-logind[1246]: Session 4 logged out. Waiting for processes to exit. May 13 08:28:51.992021 systemd-logind[1246]: Removed session 4. May 13 08:28:53.267506 sshd[1400]: Accepted publickey for core from 172.24.4.1 port 55388 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:28:53.271320 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:53.282916 systemd[1]: Started session-5.scope. May 13 08:28:53.284404 systemd-logind[1246]: New session 5 of user core. May 13 08:28:53.995257 sshd[1400]: pam_unix(sshd:session): session closed for user core May 13 08:28:53.998439 systemd[1]: Started sshd@5-172.24.4.231:22-172.24.4.1:44960.service. May 13 08:28:54.002267 systemd[1]: sshd@4-172.24.4.231:22-172.24.4.1:55388.service: Deactivated successfully. May 13 08:28:54.005291 systemd[1]: session-5.scope: Deactivated successfully. May 13 08:28:54.006872 systemd-logind[1246]: Session 5 logged out. Waiting for processes to exit. May 13 08:28:54.009671 systemd-logind[1246]: Removed session 5. May 13 08:28:55.528073 sshd[1407]: Accepted publickey for core from 172.24.4.1 port 44960 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:28:55.530912 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:55.542692 systemd[1]: Started session-6.scope. May 13 08:28:55.545776 systemd-logind[1246]: New session 6 of user core. May 13 08:28:56.071257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 08:28:56.071747 systemd[1]: Stopped kubelet.service. May 13 08:28:56.074868 systemd[1]: Starting kubelet.service... 
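The kubelet exits here (and on each scheduled restart) because /var/lib/kubelet/config.yaml does not exist yet, so systemd keeps cycling the unit. The following sketch is not from the log; it only reproduces the existence check implied by the error message, using the path quoted in it. It reports the condition rather than generating a config.

```python
# Sketch (path taken from the kubelet error above): report whether the
# kubelet's config file exists before kubelet.service is (re)started.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the log

def check_kubelet_config(path: Path = KUBELET_CONFIG) -> bool:
    if path.is_file():
        print(f"{path} present ({path.stat().st_size} bytes)")
        return True
    print(f"{path} missing; kubelet.service will keep exiting with status 1 "
          f"until node provisioning (e.g. kubeadm) writes it")
    return False

if __name__ == "__main__":
    check_kubelet_config()
```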
May 13 08:28:56.266025 sshd[1407]: pam_unix(sshd:session): session closed for user core May 13 08:28:56.270290 systemd[1]: Started sshd@6-172.24.4.231:22-172.24.4.1:44976.service. May 13 08:28:56.276493 systemd[1]: sshd@5-172.24.4.231:22-172.24.4.1:44960.service: Deactivated successfully. May 13 08:28:56.281172 systemd[1]: session-6.scope: Deactivated successfully. May 13 08:28:56.282457 systemd-logind[1246]: Session 6 logged out. Waiting for processes to exit. May 13 08:28:56.286293 systemd-logind[1246]: Removed session 6. May 13 08:28:56.356891 systemd[1]: Started kubelet.service. May 13 08:28:56.524680 kubelet[1425]: E0513 08:28:56.524545 1425 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:28:56.528749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:28:56.529101 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 08:28:57.791122 sshd[1417]: Accepted publickey for core from 172.24.4.1 port 44976 ssh2: RSA SHA256:ujy1IZCwkGt29P2AJzymKYpB6P+04yS6ZPkcpK9IyQk May 13 08:28:57.793820 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 08:28:57.804335 systemd-logind[1246]: New session 7 of user core. May 13 08:28:57.805150 systemd[1]: Started session-7.scope. May 13 08:28:58.256389 sudo[1435]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 08:28:58.257100 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 08:28:58.304966 systemd[1]: Starting coreos-metadata.service... May 13 08:29:05.368954 coreos-metadata[1439]: May 13 08:29:05.368 WARN failed to locate config-drive, using the metadata service API instead May 13 08:29:05.461223 coreos-metadata[1439]: May 13 08:29:05.460 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 08:29:05.650055 coreos-metadata[1439]: May 13 08:29:05.649 INFO Fetch successful May 13 08:29:05.650782 coreos-metadata[1439]: May 13 08:29:05.650 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 13 08:29:05.669933 coreos-metadata[1439]: May 13 08:29:05.669 INFO Fetch successful May 13 08:29:05.670348 coreos-metadata[1439]: May 13 08:29:05.670 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 13 08:29:05.686920 coreos-metadata[1439]: May 13 08:29:05.686 INFO Fetch successful May 13 08:29:05.687431 coreos-metadata[1439]: May 13 08:29:05.687 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 13 08:29:05.704692 coreos-metadata[1439]: May 13 08:29:05.704 INFO Fetch successful May 13 08:29:05.705049 coreos-metadata[1439]: May 13 08:29:05.704 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 13 08:29:05.717400 coreos-metadata[1439]: May 13 08:29:05.717 INFO Fetch successful May 13 08:29:05.736029 systemd[1]: Finished coreos-metadata.service. May 13 08:29:06.571435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 08:29:06.571972 systemd[1]: Stopped kubelet.service. May 13 08:29:06.576964 systemd[1]: Starting kubelet.service... May 13 08:29:06.972147 systemd[1]: Started kubelet.service. 
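coreos-metadata resolves instance details by querying the EC2-style endpoints listed above. The sketch below fetches the same paths; only the URLs come from the log, while the timeout and the minimal error handling are assumptions. The link-local address 169.254.169.254 is reachable only from inside the instance.

```python
# Sketch: fetch the metadata paths coreos-metadata queried in the log above.
# URLs are from the log; timeout and error handling are assumptions.
import urllib.request

BASE = "http://169.254.169.254/latest/meta-data"
PATHS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

def fetch(path: str, timeout: float = 2.0) -> str:
    with urllib.request.urlopen(f"{BASE}/{path}", timeout=timeout) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    for p in PATHS:
        try:
            print(f"{p}: {fetch(p)}")
        except OSError as err:
            print(f"{p}: fetch failed ({err})")
```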
May 13 08:29:07.072675 kubelet[1469]: E0513 08:29:07.072123 1469 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 08:29:07.074278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 08:29:07.074450 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 08:29:07.806858 systemd[1]: Stopped kubelet.service. May 13 08:29:07.810124 systemd[1]: Starting kubelet.service... May 13 08:29:07.845660 systemd[1]: Reloading. May 13 08:29:07.961910 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-05-13T08:29:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 08:29:07.962235 /usr/lib/systemd/system-generators/torcx-generator[1520]: time="2025-05-13T08:29:07Z" level=info msg="torcx already run" May 13 08:29:08.788610 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 08:29:08.788927 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 08:29:08.814480 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 08:29:08.917509 systemd[1]: Started kubelet.service. May 13 08:29:08.920761 systemd[1]: Stopping kubelet.service... May 13 08:29:08.922122 systemd[1]: kubelet.service: Deactivated successfully. May 13 08:29:08.922528 systemd[1]: Stopped kubelet.service. May 13 08:29:08.925286 systemd[1]: Starting kubelet.service... May 13 08:29:09.032953 systemd[1]: Started kubelet.service. May 13 08:29:09.721247 kubelet[1588]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 08:29:09.721247 kubelet[1588]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 08:29:09.721692 kubelet[1588]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 08:29:09.722173 kubelet[1588]: I0513 08:29:09.722112 1588 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 08:29:10.292665 kubelet[1588]: I0513 08:29:10.292616 1588 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 08:29:10.292665 kubelet[1588]: I0513 08:29:10.292650 1588 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 08:29:10.293257 kubelet[1588]: I0513 08:29:10.293225 1588 server.go:927] "Client rotation is on, will bootstrap in background" May 13 08:29:10.312852 kubelet[1588]: I0513 08:29:10.312781 1588 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 08:29:10.329954 kubelet[1588]: I0513 08:29:10.329892 1588 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 08:29:10.330330 kubelet[1588]: I0513 08:29:10.330278 1588 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 08:29:10.330562 kubelet[1588]: I0513 08:29:10.330314 1588 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.231","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 08:29:10.330562 kubelet[1588]: I0513 08:29:10.330555 1588 topology_manager.go:138] "Creating topology manager with none policy" May 13 08:29:10.330562 kubelet[1588]: I0513 08:29:10.330567 1588 container_manager_linux.go:301] "Creating device plugin manager" May 13 08:29:10.330965 kubelet[1588]: I0513 08:29:10.330700 1588 state_mem.go:36] "Initialized new in-memory state store" May 13 08:29:10.332278 kubelet[1588]: I0513 08:29:10.332239 1588 kubelet.go:400] "Attempting to sync node with API server" May 13 08:29:10.332278 kubelet[1588]: I0513 08:29:10.332260 1588 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 08:29:10.332453 kubelet[1588]: I0513 08:29:10.332435 1588 kubelet.go:312] "Adding apiserver pod source" May 13 
08:29:10.332453 kubelet[1588]: I0513 08:29:10.332451 1588 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 08:29:10.334769 kubelet[1588]: E0513 08:29:10.334719 1588 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:10.334769 kubelet[1588]: E0513 08:29:10.334772 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:10.343536 kubelet[1588]: I0513 08:29:10.343507 1588 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 08:29:10.345511 kubelet[1588]: I0513 08:29:10.345474 1588 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 08:29:10.345660 kubelet[1588]: W0513 08:29:10.345523 1588 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 08:29:10.346152 kubelet[1588]: I0513 08:29:10.346124 1588 server.go:1264] "Started kubelet" May 13 08:29:10.346537 kubelet[1588]: I0513 08:29:10.346484 1588 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 08:29:10.348173 kubelet[1588]: I0513 08:29:10.348130 1588 server.go:455] "Adding debug handlers to kubelet server" May 13 08:29:10.354957 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 13 08:29:10.355881 kubelet[1588]: I0513 08:29:10.355847 1588 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 08:29:10.367968 kubelet[1588]: I0513 08:29:10.355973 1588 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 08:29:10.368186 kubelet[1588]: I0513 08:29:10.368099 1588 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 08:29:10.368186 kubelet[1588]: I0513 08:29:10.368185 1588 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 08:29:10.369056 kubelet[1588]: I0513 08:29:10.368991 1588 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 08:29:10.369430 kubelet[1588]: I0513 08:29:10.369379 1588 reconciler.go:26] "Reconciler: start to sync state" May 13 08:29:10.371555 kubelet[1588]: I0513 08:29:10.371523 1588 factory.go:221] Registration of the systemd container factory successfully May 13 08:29:10.371729 kubelet[1588]: I0513 08:29:10.371673 1588 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 08:29:10.375904 kubelet[1588]: I0513 08:29:10.375870 1588 factory.go:221] Registration of the containerd container factory successfully May 13 08:29:10.377625 kubelet[1588]: E0513 08:29:10.377309 1588 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.231.183f08e2bb99fc4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.231,UID:172.24.4.231,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.231,},FirstTimestamp:2025-05-13 08:29:10.346103886 +0000 UTC 
m=+1.305257592,LastTimestamp:2025-05-13 08:29:10.346103886 +0000 UTC m=+1.305257592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.231,}" May 13 08:29:10.378247 kubelet[1588]: W0513 08:29:10.377963 1588 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 08:29:10.378488 kubelet[1588]: E0513 08:29:10.378440 1588 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 08:29:10.378685 kubelet[1588]: W0513 08:29:10.378152 1588 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.231" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 08:29:10.378937 kubelet[1588]: E0513 08:29:10.378884 1588 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.231" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 08:29:10.405227 kubelet[1588]: E0513 08:29:10.405187 1588 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 08:29:10.407353 kubelet[1588]: W0513 08:29:10.405753 1588 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 08:29:10.409808 kubelet[1588]: E0513 08:29:10.409770 1588 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 08:29:10.411509 kubelet[1588]: E0513 08:29:10.405845 1588 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.231\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 08:29:10.419808 kubelet[1588]: I0513 08:29:10.419759 1588 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 08:29:10.419808 kubelet[1588]: I0513 08:29:10.419783 1588 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 08:29:10.419808 kubelet[1588]: I0513 08:29:10.419801 1588 state_mem.go:36] "Initialized new in-memory state store" May 13 08:29:10.420411 kubelet[1588]: E0513 08:29:10.420320 1588 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.231.183f08e2bf1f0c7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.231,UID:172.24.4.231,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:172.24.4.231,},FirstTimestamp:2025-05-13 08:29:10.405155963 +0000 UTC m=+1.364309729,LastTimestamp:2025-05-13 08:29:10.405155963 +0000 UTC m=+1.364309729,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.231,}" May 13 08:29:10.427659 kubelet[1588]: I0513 08:29:10.427635 1588 policy_none.go:49] "None policy: Start" May 13 08:29:10.427806 kubelet[1588]: E0513 08:29:10.427608 1588 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.231.183f08e2bff2f58a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.231,UID:172.24.4.231,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.24.4.231 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.24.4.231,},FirstTimestamp:2025-05-13 08:29:10.419043722 +0000 UTC m=+1.378197438,LastTimestamp:2025-05-13 08:29:10.419043722 +0000 UTC m=+1.378197438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.231,}" May 13 08:29:10.428463 kubelet[1588]: I0513 08:29:10.428443 1588 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 08:29:10.428522 kubelet[1588]: I0513 08:29:10.428470 1588 state_mem.go:35] "Initializing new in-memory state store" May 13 08:29:10.448525 kubelet[1588]: I0513 08:29:10.448503 1588 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 08:29:10.448900 kubelet[1588]: I0513 08:29:10.448863 1588 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 08:29:10.449090 kubelet[1588]: I0513 08:29:10.449080 1588 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 08:29:10.456432 kubelet[1588]: E0513 08:29:10.456402 1588 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.231\" not found" May 13 08:29:10.469150 kubelet[1588]: I0513 08:29:10.469107 1588 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.231" May 13 08:29:10.480032 kubelet[1588]: I0513 08:29:10.479994 1588 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.231" May 13 08:29:10.516505 kubelet[1588]: E0513 08:29:10.516453 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:10.537124 kubelet[1588]: I0513 08:29:10.537089 1588 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 08:29:10.538733 kubelet[1588]: I0513 08:29:10.538718 1588 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 08:29:10.538869 kubelet[1588]: I0513 08:29:10.538840 1588 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 08:29:10.538997 kubelet[1588]: I0513 08:29:10.538986 1588 kubelet.go:2337] "Starting kubelet main sync loop" May 13 08:29:10.539117 kubelet[1588]: E0513 08:29:10.539104 1588 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 08:29:10.598970 sudo[1435]: pam_unix(sudo:session): session closed for user root May 13 08:29:10.617908 kubelet[1588]: E0513 08:29:10.617853 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:10.718255 kubelet[1588]: E0513 08:29:10.718162 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:10.819250 kubelet[1588]: E0513 08:29:10.819193 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:10.920712 kubelet[1588]: E0513 08:29:10.920514 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:10.991653 sshd[1417]: pam_unix(sshd:session): session closed for user core May 13 08:29:10.997485 systemd[1]: sshd@6-172.24.4.231:22-172.24.4.1:44976.service: Deactivated successfully. May 13 08:29:10.999305 systemd[1]: session-7.scope: Deactivated successfully. May 13 08:29:11.002279 systemd-logind[1246]: Session 7 logged out. Waiting for processes to exit. May 13 08:29:11.005136 systemd-logind[1246]: Removed session 7. May 13 08:29:11.021511 kubelet[1588]: E0513 08:29:11.021460 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.122513 kubelet[1588]: E0513 08:29:11.122455 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.223752 kubelet[1588]: E0513 08:29:11.223368 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.298374 kubelet[1588]: I0513 08:29:11.298228 1588 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 08:29:11.298719 kubelet[1588]: W0513 08:29:11.298527 1588 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 08:29:11.323970 kubelet[1588]: E0513 08:29:11.323868 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.335336 kubelet[1588]: E0513 08:29:11.335116 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:11.424147 kubelet[1588]: E0513 08:29:11.424091 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.525386 kubelet[1588]: E0513 08:29:11.524758 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.625752 kubelet[1588]: E0513 08:29:11.625701 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not 
found" May 13 08:29:11.726904 kubelet[1588]: E0513 08:29:11.726852 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.828194 kubelet[1588]: E0513 08:29:11.828135 1588 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.231\" not found" May 13 08:29:11.931540 kubelet[1588]: I0513 08:29:11.931488 1588 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 08:29:11.932212 env[1258]: time="2025-05-13T08:29:11.932140634Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 08:29:11.932872 kubelet[1588]: I0513 08:29:11.932554 1588 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 08:29:12.335586 kubelet[1588]: I0513 08:29:12.335542 1588 apiserver.go:52] "Watching apiserver" May 13 08:29:12.336087 kubelet[1588]: E0513 08:29:12.335808 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:12.347977 kubelet[1588]: I0513 08:29:12.347923 1588 topology_manager.go:215] "Topology Admit Handler" podUID="f40dee87-0d88-4059-9d72-333ed813361c" podNamespace="kube-system" podName="cilium-slptr" May 13 08:29:12.348201 kubelet[1588]: I0513 08:29:12.348162 1588 topology_manager.go:215] "Topology Admit Handler" podUID="a39874ee-7002-4b6d-aa3f-e4791d38014f" podNamespace="kube-system" podName="kube-proxy-pz6dx" May 13 08:29:12.371563 kubelet[1588]: I0513 08:29:12.371496 1588 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 08:29:12.384355 kubelet[1588]: I0513 08:29:12.384262 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40dee87-0d88-4059-9d72-333ed813361c-clustermesh-secrets\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384509 kubelet[1588]: I0513 08:29:12.384432 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40dee87-0d88-4059-9d72-333ed813361c-cilium-config-path\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384509 kubelet[1588]: I0513 08:29:12.384489 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a39874ee-7002-4b6d-aa3f-e4791d38014f-xtables-lock\") pod \"kube-proxy-pz6dx\" (UID: \"a39874ee-7002-4b6d-aa3f-e4791d38014f\") " pod="kube-system/kube-proxy-pz6dx" May 13 08:29:12.384722 kubelet[1588]: I0513 08:29:12.384532 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-bpf-maps\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384722 kubelet[1588]: I0513 08:29:12.384584 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-net\") pod \"cilium-slptr\" (UID: 
\"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384722 kubelet[1588]: I0513 08:29:12.384667 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-kernel\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384936 kubelet[1588]: I0513 08:29:12.384710 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbpz7\" (UniqueName: \"kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-kube-api-access-zbpz7\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384936 kubelet[1588]: I0513 08:29:12.384769 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-hostproc\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384936 kubelet[1588]: I0513 08:29:12.384808 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-cgroup\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384936 kubelet[1588]: I0513 08:29:12.384848 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cni-path\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384936 kubelet[1588]: I0513 08:29:12.384886 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-etc-cni-netd\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.384936 kubelet[1588]: I0513 08:29:12.384924 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-lib-modules\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.385302 kubelet[1588]: I0513 08:29:12.384962 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-xtables-lock\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.385302 kubelet[1588]: I0513 08:29:12.385013 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a39874ee-7002-4b6d-aa3f-e4791d38014f-lib-modules\") pod \"kube-proxy-pz6dx\" (UID: \"a39874ee-7002-4b6d-aa3f-e4791d38014f\") " pod="kube-system/kube-proxy-pz6dx" May 13 08:29:12.385302 kubelet[1588]: I0513 08:29:12.385061 1588 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-run\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.385302 kubelet[1588]: I0513 08:29:12.385105 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a39874ee-7002-4b6d-aa3f-e4791d38014f-kube-proxy\") pod \"kube-proxy-pz6dx\" (UID: \"a39874ee-7002-4b6d-aa3f-e4791d38014f\") " pod="kube-system/kube-proxy-pz6dx" May 13 08:29:12.385302 kubelet[1588]: I0513 08:29:12.385146 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ng4\" (UniqueName: \"kubernetes.io/projected/a39874ee-7002-4b6d-aa3f-e4791d38014f-kube-api-access-p7ng4\") pod \"kube-proxy-pz6dx\" (UID: \"a39874ee-7002-4b6d-aa3f-e4791d38014f\") " pod="kube-system/kube-proxy-pz6dx" May 13 08:29:12.385302 kubelet[1588]: I0513 08:29:12.385187 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-hubble-tls\") pod \"cilium-slptr\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " pod="kube-system/cilium-slptr" May 13 08:29:12.657930 env[1258]: time="2025-05-13T08:29:12.656016810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pz6dx,Uid:a39874ee-7002-4b6d-aa3f-e4791d38014f,Namespace:kube-system,Attempt:0,}" May 13 08:29:12.660326 env[1258]: time="2025-05-13T08:29:12.660270203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slptr,Uid:f40dee87-0d88-4059-9d72-333ed813361c,Namespace:kube-system,Attempt:0,}" May 13 08:29:13.336403 kubelet[1588]: E0513 08:29:13.336290 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:13.462632 env[1258]: time="2025-05-13T08:29:13.462505638Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.466203 env[1258]: time="2025-05-13T08:29:13.466132361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.471687 env[1258]: time="2025-05-13T08:29:13.471577453Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.476898 env[1258]: time="2025-05-13T08:29:13.476842249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.482495 env[1258]: time="2025-05-13T08:29:13.482443017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.489161 env[1258]: time="2025-05-13T08:29:13.489088987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.494773 
env[1258]: time="2025-05-13T08:29:13.494712493Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.500816 env[1258]: time="2025-05-13T08:29:13.500746730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:13.513405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573195668.mount: Deactivated successfully. May 13 08:29:13.579522 env[1258]: time="2025-05-13T08:29:13.578183474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:29:13.579522 env[1258]: time="2025-05-13T08:29:13.578279398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:29:13.579522 env[1258]: time="2025-05-13T08:29:13.578351139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:29:13.579522 env[1258]: time="2025-05-13T08:29:13.578580240Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5 pid=1644 runtime=io.containerd.runc.v2 May 13 08:29:13.594901 env[1258]: time="2025-05-13T08:29:13.593711115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:29:13.594901 env[1258]: time="2025-05-13T08:29:13.593767041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:29:13.594901 env[1258]: time="2025-05-13T08:29:13.593782046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:29:13.600200 env[1258]: time="2025-05-13T08:29:13.600003733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a15c9b0078e1700835281f05ff2852616ebf93912472857a4074390724698b6d pid=1660 runtime=io.containerd.runc.v2 May 13 08:29:13.653562 env[1258]: time="2025-05-13T08:29:13.653507826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slptr,Uid:f40dee87-0d88-4059-9d72-333ed813361c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\"" May 13 08:29:13.656152 env[1258]: time="2025-05-13T08:29:13.656109521Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 08:29:13.668781 env[1258]: time="2025-05-13T08:29:13.668737980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pz6dx,Uid:a39874ee-7002-4b6d-aa3f-e4791d38014f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a15c9b0078e1700835281f05ff2852616ebf93912472857a4074390724698b6d\"" May 13 08:29:14.336587 kubelet[1588]: E0513 08:29:14.336430 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:15.337091 kubelet[1588]: E0513 08:29:15.336987 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:16.337514 kubelet[1588]: E0513 08:29:16.337461 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:17.338275 kubelet[1588]: E0513 08:29:17.338179 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:17.670360 update_engine[1247]: I0513 08:29:17.669670 1247 update_attempter.cc:509] Updating boot flags... May 13 08:29:18.338647 kubelet[1588]: E0513 08:29:18.338576 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:19.339274 kubelet[1588]: E0513 08:29:19.339196 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:20.339665 kubelet[1588]: E0513 08:29:20.339572 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:21.340326 kubelet[1588]: E0513 08:29:21.340106 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:22.156269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263027558.mount: Deactivated successfully. 
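The kubelet's file source keeps logging "Unable to read config path … /etc/kubernetes/manifests" because the static-pod directory it was told to watch ("Adding static pod path" earlier in the log) does not exist on this node. The sketch below is not taken from the log and assumes that creating the directory is the desired remedy; on a worker node with no static pods the message is harmless and can simply be ignored.

```python
# Sketch (assumption: creating the directory is the desired remedy):
# ensure the static-pod path the kubelet watches exists, so the repeated
# "Unable to read config path" messages stop. Must run as root.
import os

STATIC_POD_DIR = "/etc/kubernetes/manifests"  # path from the log

def ensure_static_pod_dir(path: str = STATIC_POD_DIR) -> None:
    if os.path.isdir(path):
        print(f"{path} already exists")
        return
    os.makedirs(path, mode=0o755, exist_ok=True)
    print(f"created {path}; the kubelet file source picks it up on its next check")

if __name__ == "__main__":
    ensure_static_pod_dir()
```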
May 13 08:29:22.341223 kubelet[1588]: E0513 08:29:22.341072 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:23.341575 kubelet[1588]: E0513 08:29:23.341455 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:24.341963 kubelet[1588]: E0513 08:29:24.341684 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:25.342346 kubelet[1588]: E0513 08:29:25.342167 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:26.342749 kubelet[1588]: E0513 08:29:26.342653 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:26.859098 env[1258]: time="2025-05-13T08:29:26.859011407Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:26.861857 env[1258]: time="2025-05-13T08:29:26.861805580Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:26.864920 env[1258]: time="2025-05-13T08:29:26.864857420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:26.866758 env[1258]: time="2025-05-13T08:29:26.866700170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 08:29:26.870636 env[1258]: time="2025-05-13T08:29:26.870556398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 08:29:26.873881 env[1258]: time="2025-05-13T08:29:26.873823333Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 08:29:26.899570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127558264.mount: Deactivated successfully. 
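The Cilium image above is pulled through a tag pinned to a digest (quay.io/cilium/cilium:v1.12.5@sha256:06ce…) and resolves to a local image ID (sha256:3e35…). The parser below is purely illustrative, not containerd's reference code: it splits such a pinned reference into repository, tag, and digest, and deliberately skips digest validation and other edge cases.

```python
# Illustrative only (not containerd's parser): split a "repo:tag@sha256:digest"
# image reference like the Cilium one pulled in the log above.
from typing import Optional, Tuple

def split_reference(ref: str) -> Tuple[str, Optional[str], Optional[str]]:
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, tag = ref, None
    # A ':' after the last '/' marks the tag, so a registry host:port
    # prefix is not mistaken for one.
    if ref.rfind(":") > ref.rfind("/"):
        repo, tag = ref[:ref.rfind(":")], ref[ref.rfind(":") + 1:]
    return repo, tag, digest

if __name__ == "__main__":
    example = ("quay.io/cilium/cilium:v1.12.5"
               "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    # -> ('quay.io/cilium/cilium', 'v1.12.5', 'sha256:06ce…')
    print(split_reference(example))
```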
May 13 08:29:26.913417 env[1258]: time="2025-05-13T08:29:26.913285935Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b\"" May 13 08:29:26.915984 env[1258]: time="2025-05-13T08:29:26.915902038Z" level=info msg="StartContainer for \"1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b\"" May 13 08:29:27.038499 env[1258]: time="2025-05-13T08:29:27.038244686Z" level=info msg="StartContainer for \"1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b\" returns successfully" May 13 08:29:27.343766 kubelet[1588]: E0513 08:29:27.343667 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:27.888816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b-rootfs.mount: Deactivated successfully. May 13 08:29:28.344243 kubelet[1588]: E0513 08:29:28.344084 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:28.601886 env[1258]: time="2025-05-13T08:29:28.601665029Z" level=info msg="shim disconnected" id=1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b May 13 08:29:28.603347 env[1258]: time="2025-05-13T08:29:28.603285824Z" level=warning msg="cleaning up after shim disconnected" id=1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b namespace=k8s.io May 13 08:29:28.603586 env[1258]: time="2025-05-13T08:29:28.603529996Z" level=info msg="cleaning up dead shim" May 13 08:29:28.635082 env[1258]: time="2025-05-13T08:29:28.634959946Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:29:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1782 runtime=io.containerd.runc.v2\n" May 13 08:29:29.344980 kubelet[1588]: E0513 08:29:29.344891 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:29.646727 env[1258]: time="2025-05-13T08:29:29.646512884Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 08:29:29.673945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040745546.mount: Deactivated successfully. May 13 08:29:29.689800 env[1258]: time="2025-05-13T08:29:29.689724150Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667\"" May 13 08:29:29.695860 env[1258]: time="2025-05-13T08:29:29.695810646Z" level=info msg="StartContainer for \"4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667\"" May 13 08:29:29.799882 env[1258]: time="2025-05-13T08:29:29.799829140Z" level=info msg="StartContainer for \"4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667\" returns successfully" May 13 08:29:29.804242 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 08:29:29.804505 systemd[1]: Stopped systemd-sysctl.service. May 13 08:29:29.805338 systemd[1]: Stopping systemd-sysctl.service... 
May 13 08:29:29.807629 systemd[1]: Starting systemd-sysctl.service... May 13 08:29:29.820069 systemd[1]: Finished systemd-sysctl.service. May 13 08:29:29.979409 env[1258]: time="2025-05-13T08:29:29.979237876Z" level=info msg="shim disconnected" id=4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667 May 13 08:29:29.979409 env[1258]: time="2025-05-13T08:29:29.979360228Z" level=warning msg="cleaning up after shim disconnected" id=4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667 namespace=k8s.io May 13 08:29:29.980782 env[1258]: time="2025-05-13T08:29:29.980720847Z" level=info msg="cleaning up dead shim" May 13 08:29:30.011198 env[1258]: time="2025-05-13T08:29:30.011102527Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:29:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1845 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T08:29:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" May 13 08:29:30.305293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667-rootfs.mount: Deactivated successfully. May 13 08:29:30.305986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986786010.mount: Deactivated successfully. May 13 08:29:30.333677 kubelet[1588]: E0513 08:29:30.333474 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:30.346210 kubelet[1588]: E0513 08:29:30.346079 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:30.644086 env[1258]: time="2025-05-13T08:29:30.644020441Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 08:29:30.937994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94327506.mount: Deactivated successfully. May 13 08:29:30.945920 env[1258]: time="2025-05-13T08:29:30.945833233Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f\"" May 13 08:29:30.948056 env[1258]: time="2025-05-13T08:29:30.948005112Z" level=info msg="StartContainer for \"6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f\"" May 13 08:29:31.103283 env[1258]: time="2025-05-13T08:29:31.103222214Z" level=info msg="StartContainer for \"6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f\" returns successfully" May 13 08:29:31.305168 systemd[1]: run-containerd-runc-k8s.io-6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f-runc.MkO87G.mount: Deactivated successfully. May 13 08:29:31.305504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f-rootfs.mount: Deactivated successfully. May 13 08:29:31.378876 kubelet[1588]: E0513 08:29:31.347340 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
May 13 08:29:31.418006 env[1258]: time="2025-05-13T08:29:31.417889985Z" level=info msg="shim disconnected" id=6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f May 13 08:29:31.418006 env[1258]: time="2025-05-13T08:29:31.417996088Z" level=warning msg="cleaning up after shim disconnected" id=6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f namespace=k8s.io May 13 08:29:31.418006 env[1258]: time="2025-05-13T08:29:31.418021915Z" level=info msg="cleaning up dead shim" May 13 08:29:31.465856 env[1258]: time="2025-05-13T08:29:31.465759787Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:29:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1903 runtime=io.containerd.runc.v2\n" May 13 08:29:31.647038 env[1258]: time="2025-05-13T08:29:31.646924543Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 08:29:32.267519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073526526.mount: Deactivated successfully. May 13 08:29:32.286467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880931899.mount: Deactivated successfully. May 13 08:29:32.348417 kubelet[1588]: E0513 08:29:32.348288 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:32.407555 env[1258]: time="2025-05-13T08:29:32.407476306Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:32.416635 env[1258]: time="2025-05-13T08:29:32.416501471Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5\"" May 13 08:29:32.418580 env[1258]: time="2025-05-13T08:29:32.418498896Z" level=info msg="StartContainer for \"fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5\"" May 13 08:29:32.422943 env[1258]: time="2025-05-13T08:29:32.422865277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:32.427224 env[1258]: time="2025-05-13T08:29:32.427126085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:32.432043 env[1258]: time="2025-05-13T08:29:32.431969626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:32.433879 env[1258]: time="2025-05-13T08:29:32.433821387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 08:29:32.441001 env[1258]: time="2025-05-13T08:29:32.440935503Z" level=info msg="CreateContainer within sandbox \"a15c9b0078e1700835281f05ff2852616ebf93912472857a4074390724698b6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 08:29:32.481485 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount282721522.mount: Deactivated successfully. May 13 08:29:32.499045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970737594.mount: Deactivated successfully. May 13 08:29:32.520418 env[1258]: time="2025-05-13T08:29:32.520311027Z" level=info msg="CreateContainer within sandbox \"a15c9b0078e1700835281f05ff2852616ebf93912472857a4074390724698b6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84fb5b050e09a87f6ce2cd74411baa1c4b7e07c698d68041ad65c044013de0e2\"" May 13 08:29:32.521931 env[1258]: time="2025-05-13T08:29:32.521905918Z" level=info msg="StartContainer for \"84fb5b050e09a87f6ce2cd74411baa1c4b7e07c698d68041ad65c044013de0e2\"" May 13 08:29:32.545428 env[1258]: time="2025-05-13T08:29:32.545363738Z" level=info msg="StartContainer for \"fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5\" returns successfully" May 13 08:29:32.649779 env[1258]: time="2025-05-13T08:29:32.649724060Z" level=info msg="shim disconnected" id=fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5 May 13 08:29:32.649779 env[1258]: time="2025-05-13T08:29:32.649770755Z" level=warning msg="cleaning up after shim disconnected" id=fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5 namespace=k8s.io May 13 08:29:32.649779 env[1258]: time="2025-05-13T08:29:32.649781635Z" level=info msg="cleaning up dead shim" May 13 08:29:32.650381 env[1258]: time="2025-05-13T08:29:32.650344031Z" level=info msg="StartContainer for \"84fb5b050e09a87f6ce2cd74411baa1c4b7e07c698d68041ad65c044013de0e2\" returns successfully" May 13 08:29:32.669942 env[1258]: time="2025-05-13T08:29:32.669868462Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:29:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1997 runtime=io.containerd.runc.v2\n" May 13 08:29:33.349354 kubelet[1588]: E0513 08:29:33.349245 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:33.453825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5-rootfs.mount: Deactivated successfully. May 13 08:29:33.677635 env[1258]: time="2025-05-13T08:29:33.677342362Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 08:29:33.710576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3214069756.mount: Deactivated successfully. 
May 13 08:29:33.712570 kubelet[1588]: I0513 08:29:33.712139 1588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pz6dx" podStartSLOduration=4.944881896 podStartE2EDuration="23.711954281s" podCreationTimestamp="2025-05-13 08:29:10 +0000 UTC" firstStartedPulling="2025-05-13 08:29:13.670124153 +0000 UTC m=+4.629277859" lastFinishedPulling="2025-05-13 08:29:32.437196477 +0000 UTC m=+23.396350244" observedRunningTime="2025-05-13 08:29:32.765228433 +0000 UTC m=+23.724382139" watchObservedRunningTime="2025-05-13 08:29:33.711954281 +0000 UTC m=+24.671108027" May 13 08:29:33.734184 env[1258]: time="2025-05-13T08:29:33.734100987Z" level=info msg="CreateContainer within sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\"" May 13 08:29:33.735887 env[1258]: time="2025-05-13T08:29:33.735822044Z" level=info msg="StartContainer for \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\"" May 13 08:29:33.839816 env[1258]: time="2025-05-13T08:29:33.838839597Z" level=info msg="StartContainer for \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\" returns successfully" May 13 08:29:33.962180 kubelet[1588]: I0513 08:29:33.962057 1588 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 08:29:34.349923 kubelet[1588]: E0513 08:29:34.349822 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:34.353685 kernel: Initializing XFRM netlink socket May 13 08:29:34.728570 kubelet[1588]: I0513 08:29:34.728360 1588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-slptr" podStartSLOduration=11.514981489 podStartE2EDuration="24.728296227s" podCreationTimestamp="2025-05-13 08:29:10 +0000 UTC" firstStartedPulling="2025-05-13 08:29:13.655396325 +0000 UTC m=+4.614550041" lastFinishedPulling="2025-05-13 08:29:26.868711033 +0000 UTC m=+17.827864779" observedRunningTime="2025-05-13 08:29:34.721785903 +0000 UTC m=+25.680939649" watchObservedRunningTime="2025-05-13 08:29:34.728296227 +0000 UTC m=+25.687449983" May 13 08:29:35.350253 kubelet[1588]: E0513 08:29:35.350195 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:36.206427 systemd-networkd[1044]: cilium_host: Link UP May 13 08:29:36.214984 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 08:29:36.215156 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 08:29:36.207927 systemd-networkd[1044]: cilium_net: Link UP May 13 08:29:36.213922 systemd-networkd[1044]: cilium_net: Gained carrier May 13 08:29:36.218332 systemd-networkd[1044]: cilium_host: Gained carrier May 13 08:29:36.338478 systemd-networkd[1044]: cilium_vxlan: Link UP May 13 08:29:36.338485 systemd-networkd[1044]: cilium_vxlan: Gained carrier May 13 08:29:36.352183 kubelet[1588]: E0513 08:29:36.352013 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:36.616657 kernel: NET: Registered PF_ALG protocol family May 13 08:29:36.917906 systemd-networkd[1044]: cilium_net: Gained IPv6LL May 13 08:29:36.971378 kubelet[1588]: I0513 08:29:36.971278 1588 topology_manager.go:215] "Topology Admit Handler" 
podUID="6cc6c116-007c-47bf-bf79-63f59275c3aa" podNamespace="default" podName="nginx-deployment-85f456d6dd-cfq7n" May 13 08:29:37.099999 kubelet[1588]: I0513 08:29:37.099936 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfmx\" (UniqueName: \"kubernetes.io/projected/6cc6c116-007c-47bf-bf79-63f59275c3aa-kube-api-access-9gfmx\") pod \"nginx-deployment-85f456d6dd-cfq7n\" (UID: \"6cc6c116-007c-47bf-bf79-63f59275c3aa\") " pod="default/nginx-deployment-85f456d6dd-cfq7n" May 13 08:29:37.109804 systemd-networkd[1044]: cilium_host: Gained IPv6LL May 13 08:29:37.280615 env[1258]: time="2025-05-13T08:29:37.280006114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfq7n,Uid:6cc6c116-007c-47bf-bf79-63f59275c3aa,Namespace:default,Attempt:0,}" May 13 08:29:37.353446 kubelet[1588]: E0513 08:29:37.353355 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:37.581026 systemd-networkd[1044]: lxc_health: Link UP May 13 08:29:37.593848 systemd-networkd[1044]: lxc_health: Gained carrier May 13 08:29:37.597633 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 08:29:37.858051 systemd-networkd[1044]: lxca83ff1bc73de: Link UP May 13 08:29:37.864628 kernel: eth0: renamed from tmp75b1d May 13 08:29:37.874192 systemd-networkd[1044]: lxca83ff1bc73de: Gained carrier May 13 08:29:37.874705 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca83ff1bc73de: link becomes ready May 13 08:29:37.877757 systemd-networkd[1044]: cilium_vxlan: Gained IPv6LL May 13 08:29:38.353736 kubelet[1588]: E0513 08:29:38.353653 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:38.966935 systemd-networkd[1044]: lxc_health: Gained IPv6LL May 13 08:29:39.157860 systemd-networkd[1044]: lxca83ff1bc73de: Gained IPv6LL May 13 08:29:39.354926 kubelet[1588]: E0513 08:29:39.354833 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:40.356727 kubelet[1588]: E0513 08:29:40.356660 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:41.358349 kubelet[1588]: E0513 08:29:41.358287 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:42.359191 kubelet[1588]: E0513 08:29:42.359136 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:42.903861 env[1258]: time="2025-05-13T08:29:42.903738911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:29:42.904813 env[1258]: time="2025-05-13T08:29:42.904732964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:29:42.905048 env[1258]: time="2025-05-13T08:29:42.904979147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:29:42.905554 env[1258]: time="2025-05-13T08:29:42.905491331Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75b1db0bdbc7fb93815d9ce953f80ddfa23869739056117c42c43b8fa775b4cf pid=2648 runtime=io.containerd.runc.v2 May 13 08:29:43.000823 env[1258]: time="2025-05-13T08:29:43.000753949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-cfq7n,Uid:6cc6c116-007c-47bf-bf79-63f59275c3aa,Namespace:default,Attempt:0,} returns sandbox id \"75b1db0bdbc7fb93815d9ce953f80ddfa23869739056117c42c43b8fa775b4cf\"" May 13 08:29:43.005100 env[1258]: time="2025-05-13T08:29:43.005055349Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 08:29:43.361178 kubelet[1588]: E0513 08:29:43.361049 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:44.362141 kubelet[1588]: E0513 08:29:44.361901 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:45.362209 kubelet[1588]: E0513 08:29:45.362142 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:46.363517 kubelet[1588]: E0513 08:29:46.363383 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:47.364311 kubelet[1588]: E0513 08:29:47.364224 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:47.588798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644026129.mount: Deactivated successfully. 
May 13 08:29:48.365391 kubelet[1588]: E0513 08:29:48.365261 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:49.365507 kubelet[1588]: E0513 08:29:49.365453 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:50.333254 kubelet[1588]: E0513 08:29:50.333094 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:50.355011 env[1258]: time="2025-05-13T08:29:50.354843276Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:50.359554 env[1258]: time="2025-05-13T08:29:50.359484387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:50.365701 env[1258]: time="2025-05-13T08:29:50.365481457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:50.366511 kubelet[1588]: E0513 08:29:50.366469 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:50.376349 env[1258]: time="2025-05-13T08:29:50.376242445Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:29:50.381186 env[1258]: time="2025-05-13T08:29:50.380284950Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 13 08:29:50.385676 env[1258]: time="2025-05-13T08:29:50.385641826Z" level=info msg="CreateContainer within sandbox \"75b1db0bdbc7fb93815d9ce953f80ddfa23869739056117c42c43b8fa775b4cf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 08:29:50.416576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1581433457.mount: Deactivated successfully. 
May 13 08:29:50.419127 env[1258]: time="2025-05-13T08:29:50.418982726Z" level=info msg="CreateContainer within sandbox \"75b1db0bdbc7fb93815d9ce953f80ddfa23869739056117c42c43b8fa775b4cf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2619d6f627ce24bc8a20c77d7b748b86a559e7121f504f4f5af076e4f438c11d\"" May 13 08:29:50.420924 env[1258]: time="2025-05-13T08:29:50.420802762Z" level=info msg="StartContainer for \"2619d6f627ce24bc8a20c77d7b748b86a559e7121f504f4f5af076e4f438c11d\"" May 13 08:29:50.651494 env[1258]: time="2025-05-13T08:29:50.651381674Z" level=info msg="StartContainer for \"2619d6f627ce24bc8a20c77d7b748b86a559e7121f504f4f5af076e4f438c11d\" returns successfully" May 13 08:29:50.792964 kubelet[1588]: I0513 08:29:50.792815 1588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-cfq7n" podStartSLOduration=7.413260737 podStartE2EDuration="14.792751972s" podCreationTimestamp="2025-05-13 08:29:36 +0000 UTC" firstStartedPulling="2025-05-13 08:29:43.003844717 +0000 UTC m=+33.962998473" lastFinishedPulling="2025-05-13 08:29:50.383336002 +0000 UTC m=+41.342489708" observedRunningTime="2025-05-13 08:29:50.790492424 +0000 UTC m=+41.749646180" watchObservedRunningTime="2025-05-13 08:29:50.792751972 +0000 UTC m=+41.751905728" May 13 08:29:51.366929 kubelet[1588]: E0513 08:29:51.366880 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:52.368323 kubelet[1588]: E0513 08:29:52.368213 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:53.369442 kubelet[1588]: E0513 08:29:53.369299 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:54.370283 kubelet[1588]: E0513 08:29:54.370212 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:55.370941 kubelet[1588]: E0513 08:29:55.370789 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:56.372995 kubelet[1588]: E0513 08:29:56.372878 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:57.373834 kubelet[1588]: E0513 08:29:57.373763 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:58.375899 kubelet[1588]: E0513 08:29:58.375783 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:29:59.376867 kubelet[1588]: E0513 08:29:59.376816 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:00.378709 kubelet[1588]: E0513 08:30:00.378573 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:01.186256 kubelet[1588]: I0513 08:30:01.185658 1588 topology_manager.go:215] "Topology Admit Handler" podUID="bbb6cda0-c05b-4679-bd0a-c12320a2849b" podNamespace="default" podName="nfs-server-provisioner-0" May 13 08:30:01.261053 kubelet[1588]: I0513 08:30:01.260927 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789qd\" (UniqueName: 
\"kubernetes.io/projected/bbb6cda0-c05b-4679-bd0a-c12320a2849b-kube-api-access-789qd\") pod \"nfs-server-provisioner-0\" (UID: \"bbb6cda0-c05b-4679-bd0a-c12320a2849b\") " pod="default/nfs-server-provisioner-0" May 13 08:30:01.261642 kubelet[1588]: I0513 08:30:01.261531 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/bbb6cda0-c05b-4679-bd0a-c12320a2849b-data\") pod \"nfs-server-provisioner-0\" (UID: \"bbb6cda0-c05b-4679-bd0a-c12320a2849b\") " pod="default/nfs-server-provisioner-0" May 13 08:30:01.379455 kubelet[1588]: E0513 08:30:01.379330 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:01.497775 env[1258]: time="2025-05-13T08:30:01.496989885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bbb6cda0-c05b-4679-bd0a-c12320a2849b,Namespace:default,Attempt:0,}" May 13 08:30:01.668288 systemd-networkd[1044]: lxce042ed705cdf: Link UP May 13 08:30:01.680652 kernel: eth0: renamed from tmpa0765 May 13 08:30:01.695697 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:30:01.695867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce042ed705cdf: link becomes ready May 13 08:30:01.699043 systemd-networkd[1044]: lxce042ed705cdf: Gained carrier May 13 08:30:02.052646 env[1258]: time="2025-05-13T08:30:02.052312851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:30:02.052646 env[1258]: time="2025-05-13T08:30:02.052391688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:30:02.052646 env[1258]: time="2025-05-13T08:30:02.052416524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:30:02.053153 env[1258]: time="2025-05-13T08:30:02.053062233Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0765c4c3f61b6425b1aec334c1aca9301ef49f7e30a2855876f68cbb53bca31 pid=2771 runtime=io.containerd.runc.v2 May 13 08:30:02.200913 env[1258]: time="2025-05-13T08:30:02.200864794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:bbb6cda0-c05b-4679-bd0a-c12320a2849b,Namespace:default,Attempt:0,} returns sandbox id \"a0765c4c3f61b6425b1aec334c1aca9301ef49f7e30a2855876f68cbb53bca31\"" May 13 08:30:02.202806 env[1258]: time="2025-05-13T08:30:02.202763808Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 08:30:02.381097 kubelet[1588]: E0513 08:30:02.381007 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:02.390367 systemd[1]: run-containerd-runc-k8s.io-a0765c4c3f61b6425b1aec334c1aca9301ef49f7e30a2855876f68cbb53bca31-runc.cNCi0U.mount: Deactivated successfully. 
May 13 08:30:03.382201 kubelet[1588]: E0513 08:30:03.382135 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:03.735308 systemd-networkd[1044]: lxce042ed705cdf: Gained IPv6LL May 13 08:30:04.383483 kubelet[1588]: E0513 08:30:04.383368 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:05.384212 kubelet[1588]: E0513 08:30:05.384151 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:06.385255 kubelet[1588]: E0513 08:30:06.385176 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:06.754754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount567733832.mount: Deactivated successfully. May 13 08:30:07.385955 kubelet[1588]: E0513 08:30:07.385866 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:08.386558 kubelet[1588]: E0513 08:30:08.386486 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:09.387747 kubelet[1588]: E0513 08:30:09.387647 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:10.332858 kubelet[1588]: E0513 08:30:10.332804 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:10.387875 kubelet[1588]: E0513 08:30:10.387816 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:10.825844 env[1258]: time="2025-05-13T08:30:10.825758802Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:10.830582 env[1258]: time="2025-05-13T08:30:10.830511300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:10.837138 env[1258]: time="2025-05-13T08:30:10.837064826Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:10.842481 env[1258]: time="2025-05-13T08:30:10.842413732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:10.843944 env[1258]: time="2025-05-13T08:30:10.843884638Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 13 08:30:10.849465 env[1258]: time="2025-05-13T08:30:10.849390546Z" level=info msg="CreateContainer within sandbox \"a0765c4c3f61b6425b1aec334c1aca9301ef49f7e30a2855876f68cbb53bca31\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 08:30:10.863522 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2489995860.mount: Deactivated successfully. May 13 08:30:10.877122 env[1258]: time="2025-05-13T08:30:10.877053666Z" level=info msg="CreateContainer within sandbox \"a0765c4c3f61b6425b1aec334c1aca9301ef49f7e30a2855876f68cbb53bca31\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e1ab99503461467f8b4e95858352599c96ceb03e9d2a54a4e3645649ccea4adb\"" May 13 08:30:10.878108 env[1258]: time="2025-05-13T08:30:10.878044480Z" level=info msg="StartContainer for \"e1ab99503461467f8b4e95858352599c96ceb03e9d2a54a4e3645649ccea4adb\"" May 13 08:30:10.974431 env[1258]: time="2025-05-13T08:30:10.974377284Z" level=info msg="StartContainer for \"e1ab99503461467f8b4e95858352599c96ceb03e9d2a54a4e3645649ccea4adb\" returns successfully" May 13 08:30:11.388819 kubelet[1588]: E0513 08:30:11.388732 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:11.864061 systemd[1]: run-containerd-runc-k8s.io-e1ab99503461467f8b4e95858352599c96ceb03e9d2a54a4e3645649ccea4adb-runc.QqKtWj.mount: Deactivated successfully. May 13 08:30:11.926506 kubelet[1588]: I0513 08:30:11.925725 1588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.282665085 podStartE2EDuration="10.925683632s" podCreationTimestamp="2025-05-13 08:30:01 +0000 UTC" firstStartedPulling="2025-05-13 08:30:02.202427615 +0000 UTC m=+53.161581321" lastFinishedPulling="2025-05-13 08:30:10.845446111 +0000 UTC m=+61.804599868" observedRunningTime="2025-05-13 08:30:11.922787576 +0000 UTC m=+62.881941392" watchObservedRunningTime="2025-05-13 08:30:11.925683632 +0000 UTC m=+62.884837379" May 13 08:30:12.390085 kubelet[1588]: E0513 08:30:12.389714 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:13.390952 kubelet[1588]: E0513 08:30:13.390817 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:14.392309 kubelet[1588]: E0513 08:30:14.392145 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:15.392994 kubelet[1588]: E0513 08:30:15.392855 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:16.393881 kubelet[1588]: E0513 08:30:16.393818 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:17.395855 kubelet[1588]: E0513 08:30:17.395745 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:18.397073 kubelet[1588]: E0513 08:30:18.396979 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:19.398459 kubelet[1588]: E0513 08:30:19.398352 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:20.400272 kubelet[1588]: E0513 08:30:20.400110 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:20.834822 kubelet[1588]: I0513 08:30:20.834616 1588 topology_manager.go:215] "Topology Admit Handler" 
podUID="bf48bd1f-b958-4d64-bc71-2ae0d67d9c88" podNamespace="default" podName="test-pod-1" May 13 08:30:21.011862 kubelet[1588]: I0513 08:30:21.011690 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mkh\" (UniqueName: \"kubernetes.io/projected/bf48bd1f-b958-4d64-bc71-2ae0d67d9c88-kube-api-access-r6mkh\") pod \"test-pod-1\" (UID: \"bf48bd1f-b958-4d64-bc71-2ae0d67d9c88\") " pod="default/test-pod-1" May 13 08:30:21.012785 kubelet[1588]: I0513 08:30:21.011912 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e7c9c34b-24af-4244-9735-14f5d699449f\" (UniqueName: \"kubernetes.io/nfs/bf48bd1f-b958-4d64-bc71-2ae0d67d9c88-pvc-e7c9c34b-24af-4244-9735-14f5d699449f\") pod \"test-pod-1\" (UID: \"bf48bd1f-b958-4d64-bc71-2ae0d67d9c88\") " pod="default/test-pod-1" May 13 08:30:21.216650 kernel: FS-Cache: Loaded May 13 08:30:21.300663 kernel: RPC: Registered named UNIX socket transport module. May 13 08:30:21.301002 kernel: RPC: Registered udp transport module. May 13 08:30:21.301121 kernel: RPC: Registered tcp transport module. May 13 08:30:21.301706 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 13 08:30:21.395814 kernel: FS-Cache: Netfs 'nfs' registered for caching May 13 08:30:21.401571 kubelet[1588]: E0513 08:30:21.401447 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:21.632120 kernel: NFS: Registering the id_resolver key type May 13 08:30:21.632443 kernel: Key type id_resolver registered May 13 08:30:21.632514 kernel: Key type id_legacy registered May 13 08:30:21.703919 nfsidmap[2890]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 13 08:30:21.717235 nfsidmap[2891]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 13 08:30:21.751725 env[1258]: time="2025-05-13T08:30:21.751565109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bf48bd1f-b958-4d64-bc71-2ae0d67d9c88,Namespace:default,Attempt:0,}" May 13 08:30:21.852143 systemd-networkd[1044]: lxc912e4bc495dd: Link UP May 13 08:30:21.859284 kernel: eth0: renamed from tmp46103 May 13 08:30:21.864686 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 08:30:21.864917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc912e4bc495dd: link becomes ready May 13 08:30:21.865223 systemd-networkd[1044]: lxc912e4bc495dd: Gained carrier May 13 08:30:22.114169 env[1258]: time="2025-05-13T08:30:22.114094504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:30:22.114395 env[1258]: time="2025-05-13T08:30:22.114370638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:30:22.114504 env[1258]: time="2025-05-13T08:30:22.114481054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:30:22.114920 env[1258]: time="2025-05-13T08:30:22.114855251Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46103b0b222efd896784e084867355e77a2ae721f7e3e88f80c403edb68f6d9c pid=2919 runtime=io.containerd.runc.v2 May 13 08:30:22.203039 env[1258]: time="2025-05-13T08:30:22.202963429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bf48bd1f-b958-4d64-bc71-2ae0d67d9c88,Namespace:default,Attempt:0,} returns sandbox id \"46103b0b222efd896784e084867355e77a2ae721f7e3e88f80c403edb68f6d9c\"" May 13 08:30:22.205836 env[1258]: time="2025-05-13T08:30:22.205785879Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 08:30:22.402382 kubelet[1588]: E0513 08:30:22.402102 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:22.685264 env[1258]: time="2025-05-13T08:30:22.685059473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:22.690326 env[1258]: time="2025-05-13T08:30:22.690267596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:22.694932 env[1258]: time="2025-05-13T08:30:22.694880691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:22.699806 env[1258]: time="2025-05-13T08:30:22.699750105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:22.702211 env[1258]: time="2025-05-13T08:30:22.702149625Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 13 08:30:22.710754 env[1258]: time="2025-05-13T08:30:22.710650806Z" level=info msg="CreateContainer within sandbox \"46103b0b222efd896784e084867355e77a2ae721f7e3e88f80c403edb68f6d9c\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 13 08:30:22.756247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934348333.mount: Deactivated successfully. 
May 13 08:30:22.762659 env[1258]: time="2025-05-13T08:30:22.760552135Z" level=info msg="CreateContainer within sandbox \"46103b0b222efd896784e084867355e77a2ae721f7e3e88f80c403edb68f6d9c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"08faa92222142a5b39c96fc8437836cdeeec6c3a8a374cc9b9a611d842c24164\"" May 13 08:30:22.764749 env[1258]: time="2025-05-13T08:30:22.762788753Z" level=info msg="StartContainer for \"08faa92222142a5b39c96fc8437836cdeeec6c3a8a374cc9b9a611d842c24164\"" May 13 08:30:22.851742 env[1258]: time="2025-05-13T08:30:22.851687484Z" level=info msg="StartContainer for \"08faa92222142a5b39c96fc8437836cdeeec6c3a8a374cc9b9a611d842c24164\" returns successfully" May 13 08:30:22.934149 systemd-networkd[1044]: lxc912e4bc495dd: Gained IPv6LL May 13 08:30:23.402937 kubelet[1588]: E0513 08:30:23.402865 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:24.404254 kubelet[1588]: E0513 08:30:24.404183 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:25.405336 kubelet[1588]: E0513 08:30:25.405254 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:26.406772 kubelet[1588]: E0513 08:30:26.406676 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:27.407634 kubelet[1588]: E0513 08:30:27.407510 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:28.409811 kubelet[1588]: E0513 08:30:28.409370 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:29.411050 kubelet[1588]: E0513 08:30:29.410914 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:30.333912 kubelet[1588]: E0513 08:30:30.333771 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:30.412189 kubelet[1588]: E0513 08:30:30.412039 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:31.413933 kubelet[1588]: E0513 08:30:31.413821 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:31.748000 kubelet[1588]: I0513 08:30:31.746716 1588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=28.245224065 podStartE2EDuration="28.746537044s" podCreationTimestamp="2025-05-13 08:30:03 +0000 UTC" firstStartedPulling="2025-05-13 08:30:22.204984445 +0000 UTC m=+73.164138151" lastFinishedPulling="2025-05-13 08:30:22.706297374 +0000 UTC m=+73.665451130" observedRunningTime="2025-05-13 08:30:22.975126465 +0000 UTC m=+73.934280221" watchObservedRunningTime="2025-05-13 08:30:31.746537044 +0000 UTC m=+82.705690810" May 13 08:30:31.864887 systemd[1]: run-containerd-runc-k8s.io-0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b-runc.0ynTeX.mount: Deactivated successfully. 
May 13 08:30:31.919692 env[1258]: time="2025-05-13T08:30:31.918058447Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 08:30:31.941746 env[1258]: time="2025-05-13T08:30:31.941684235Z" level=info msg="StopContainer for \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\" with timeout 2 (s)" May 13 08:30:31.942392 env[1258]: time="2025-05-13T08:30:31.942302789Z" level=info msg="Stop container \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\" with signal terminated" May 13 08:30:31.966086 systemd-networkd[1044]: lxc_health: Link DOWN May 13 08:30:31.966099 systemd-networkd[1044]: lxc_health: Lost carrier May 13 08:30:32.050750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b-rootfs.mount: Deactivated successfully. May 13 08:30:32.417578 kubelet[1588]: E0513 08:30:32.417479 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:32.886040 env[1258]: time="2025-05-13T08:30:32.885762056Z" level=info msg="shim disconnected" id=0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b May 13 08:30:32.886771 env[1258]: time="2025-05-13T08:30:32.886685168Z" level=warning msg="cleaning up after shim disconnected" id=0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b namespace=k8s.io May 13 08:30:32.886771 env[1258]: time="2025-05-13T08:30:32.886761460Z" level=info msg="cleaning up dead shim" May 13 08:30:32.939348 env[1258]: time="2025-05-13T08:30:32.939161861Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3051 runtime=io.containerd.runc.v2\n" May 13 08:30:32.953083 env[1258]: time="2025-05-13T08:30:32.952959760Z" level=info msg="StopContainer for \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\" returns successfully" May 13 08:30:32.956014 env[1258]: time="2025-05-13T08:30:32.955906438Z" level=info msg="StopPodSandbox for \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\"" May 13 08:30:32.956476 env[1258]: time="2025-05-13T08:30:32.956397364Z" level=info msg="Container to stop \"6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:32.956646 env[1258]: time="2025-05-13T08:30:32.956470730Z" level=info msg="Container to stop \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:32.956646 env[1258]: time="2025-05-13T08:30:32.956508231Z" level=info msg="Container to stop \"1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:32.956646 env[1258]: time="2025-05-13T08:30:32.956542424Z" level=info msg="Container to stop \"4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:32.956646 env[1258]: time="2025-05-13T08:30:32.956574314Z" level=info msg="Container to stop \"fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" May 13 08:30:32.963136 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5-shm.mount: Deactivated successfully. May 13 08:30:33.039837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5-rootfs.mount: Deactivated successfully. May 13 08:30:33.046872 env[1258]: time="2025-05-13T08:30:33.046821394Z" level=info msg="shim disconnected" id=d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5 May 13 08:30:33.047481 env[1258]: time="2025-05-13T08:30:33.047459655Z" level=warning msg="cleaning up after shim disconnected" id=d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5 namespace=k8s.io May 13 08:30:33.047567 env[1258]: time="2025-05-13T08:30:33.047550424Z" level=info msg="cleaning up dead shim" May 13 08:30:33.059911 env[1258]: time="2025-05-13T08:30:33.059839711Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3085 runtime=io.containerd.runc.v2\n" May 13 08:30:33.060800 env[1258]: time="2025-05-13T08:30:33.060757844Z" level=info msg="TearDown network for sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" successfully" May 13 08:30:33.060962 env[1258]: time="2025-05-13T08:30:33.060925637Z" level=info msg="StopPodSandbox for \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" returns successfully" May 13 08:30:33.225529 kubelet[1588]: I0513 08:30:33.222728 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cni-path\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.225529 kubelet[1588]: I0513 08:30:33.222870 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-lib-modules\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.225529 kubelet[1588]: I0513 08:30:33.223078 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-xtables-lock\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.225529 kubelet[1588]: I0513 08:30:33.223175 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-run\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.225529 kubelet[1588]: I0513 08:30:33.223305 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40dee87-0d88-4059-9d72-333ed813361c-clustermesh-secrets\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.225529 kubelet[1588]: I0513 08:30:33.223427 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40dee87-0d88-4059-9d72-333ed813361c-cilium-config-path\") pod 
\"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.226896 kubelet[1588]: I0513 08:30:33.223503 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-hostproc\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.226896 kubelet[1588]: I0513 08:30:33.223715 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-hubble-tls\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.226896 kubelet[1588]: I0513 08:30:33.223809 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-cgroup\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.226896 kubelet[1588]: I0513 08:30:33.223887 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-etc-cni-netd\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.226896 kubelet[1588]: I0513 08:30:33.223958 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-bpf-maps\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.226896 kubelet[1588]: I0513 08:30:33.224059 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-net\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.227883 kubelet[1588]: I0513 08:30:33.224114 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-kernel\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.227883 kubelet[1588]: I0513 08:30:33.224183 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbpz7\" (UniqueName: \"kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-kube-api-access-zbpz7\") pod \"f40dee87-0d88-4059-9d72-333ed813361c\" (UID: \"f40dee87-0d88-4059-9d72-333ed813361c\") " May 13 08:30:33.228439 kubelet[1588]: I0513 08:30:33.228343 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-hostproc" (OuterVolumeSpecName: "hostproc") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.228882 kubelet[1588]: I0513 08:30:33.228815 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cni-path" (OuterVolumeSpecName: "cni-path") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.229235 kubelet[1588]: I0513 08:30:33.229178 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.229562 kubelet[1588]: I0513 08:30:33.229507 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.229978 kubelet[1588]: I0513 08:30:33.229922 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.236152 kubelet[1588]: I0513 08:30:33.235932 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.236980 kubelet[1588]: I0513 08:30:33.236177 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.236980 kubelet[1588]: I0513 08:30:33.236329 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.236980 kubelet[1588]: I0513 08:30:33.236442 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.236980 kubelet[1588]: I0513 08:30:33.236683 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:33.259510 systemd[1]: var-lib-kubelet-pods-f40dee87\x2d0d88\x2d4059\x2d9d72\x2d333ed813361c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzbpz7.mount: Deactivated successfully. May 13 08:30:33.265960 kubelet[1588]: I0513 08:30:33.265845 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40dee87-0d88-4059-9d72-333ed813361c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 08:30:33.266280 kubelet[1588]: I0513 08:30:33.266221 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-kube-api-access-zbpz7" (OuterVolumeSpecName: "kube-api-access-zbpz7") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "kube-api-access-zbpz7". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:30:33.272984 systemd[1]: var-lib-kubelet-pods-f40dee87\x2d0d88\x2d4059\x2d9d72\x2d333ed813361c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 08:30:33.289212 kubelet[1588]: I0513 08:30:33.289028 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f40dee87-0d88-4059-9d72-333ed813361c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 08:30:33.290687 kubelet[1588]: I0513 08:30:33.290546 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f40dee87-0d88-4059-9d72-333ed813361c" (UID: "f40dee87-0d88-4059-9d72-333ed813361c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:30:33.291548 systemd[1]: var-lib-kubelet-pods-f40dee87\x2d0d88\x2d4059\x2d9d72\x2d333ed813361c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 13 08:30:33.325333 kubelet[1588]: I0513 08:30:33.325209 1588 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-bpf-maps\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326191 kubelet[1588]: I0513 08:30:33.325983 1588 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-net\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326191 kubelet[1588]: I0513 08:30:33.326165 1588 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-host-proc-sys-kernel\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326539 kubelet[1588]: I0513 08:30:33.326216 1588 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zbpz7\" (UniqueName: \"kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-kube-api-access-zbpz7\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326539 kubelet[1588]: I0513 08:30:33.326260 1588 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cni-path\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326539 kubelet[1588]: I0513 08:30:33.326304 1588 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-lib-modules\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326539 kubelet[1588]: I0513 08:30:33.326345 1588 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-xtables-lock\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326539 kubelet[1588]: I0513 08:30:33.326385 1588 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-run\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326539 kubelet[1588]: I0513 08:30:33.326426 1588 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f40dee87-0d88-4059-9d72-333ed813361c-clustermesh-secrets\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.326539 kubelet[1588]: I0513 08:30:33.326472 1588 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40dee87-0d88-4059-9d72-333ed813361c-cilium-config-path\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.327824 kubelet[1588]: I0513 08:30:33.326578 1588 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-hostproc\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.327824 kubelet[1588]: I0513 08:30:33.326715 1588 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f40dee87-0d88-4059-9d72-333ed813361c-hubble-tls\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.327824 kubelet[1588]: I0513 08:30:33.326761 1588 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-cilium-cgroup\") on node \"172.24.4.231\" DevicePath \"\"" May 13 
08:30:33.327824 kubelet[1588]: I0513 08:30:33.326821 1588 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f40dee87-0d88-4059-9d72-333ed813361c-etc-cni-netd\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:33.419076 kubelet[1588]: E0513 08:30:33.418933 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:34.020342 kubelet[1588]: I0513 08:30:34.020282 1588 scope.go:117] "RemoveContainer" containerID="0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b" May 13 08:30:34.026554 env[1258]: time="2025-05-13T08:30:34.026368776Z" level=info msg="RemoveContainer for \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\"" May 13 08:30:34.033504 env[1258]: time="2025-05-13T08:30:34.033444027Z" level=info msg="RemoveContainer for \"0f47c897af31e487c90b9ae080259c542bf7e1d455f155185bdfc7af1095da0b\" returns successfully" May 13 08:30:34.034124 kubelet[1588]: I0513 08:30:34.034101 1588 scope.go:117] "RemoveContainer" containerID="fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5" May 13 08:30:34.035863 env[1258]: time="2025-05-13T08:30:34.035820522Z" level=info msg="RemoveContainer for \"fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5\"" May 13 08:30:34.041293 env[1258]: time="2025-05-13T08:30:34.041208043Z" level=info msg="RemoveContainer for \"fdc369ab8b891ee49fb3c7040ea6b1457ea6f461f2d3b2e7fe03bd7aef399ef5\" returns successfully" May 13 08:30:34.041828 kubelet[1588]: I0513 08:30:34.041798 1588 scope.go:117] "RemoveContainer" containerID="6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f" May 13 08:30:34.046836 env[1258]: time="2025-05-13T08:30:34.046750254Z" level=info msg="RemoveContainer for \"6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f\"" May 13 08:30:34.053075 env[1258]: time="2025-05-13T08:30:34.053030802Z" level=info msg="RemoveContainer for \"6021390bf4db312822ed8de0cc84627cc78f9e0429c9a6bbb2c85029ee41ff2f\" returns successfully" May 13 08:30:34.053685 kubelet[1588]: I0513 08:30:34.053625 1588 scope.go:117] "RemoveContainer" containerID="4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667" May 13 08:30:34.055816 env[1258]: time="2025-05-13T08:30:34.055769232Z" level=info msg="RemoveContainer for \"4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667\"" May 13 08:30:34.064572 env[1258]: time="2025-05-13T08:30:34.064433949Z" level=info msg="RemoveContainer for \"4a90604bf3776f0989f13cc441cc6385865de56c5a3ff5f7db5f843db3f33667\" returns successfully" May 13 08:30:34.073515 kubelet[1588]: I0513 08:30:34.073411 1588 scope.go:117] "RemoveContainer" containerID="1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b" May 13 08:30:34.078053 env[1258]: time="2025-05-13T08:30:34.077650208Z" level=info msg="RemoveContainer for \"1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b\"" May 13 08:30:34.082397 env[1258]: time="2025-05-13T08:30:34.082353583Z" level=info msg="RemoveContainer for \"1193d17f867cc78dfb1c732b907076d918618aabb86f4e548441495f4d23628b\" returns successfully" May 13 08:30:34.419770 kubelet[1588]: E0513 08:30:34.419713 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:34.545913 kubelet[1588]: I0513 08:30:34.545668 1588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f40dee87-0d88-4059-9d72-333ed813361c" path="/var/lib/kubelet/pods/f40dee87-0d88-4059-9d72-333ed813361c/volumes" May 13 08:30:35.420863 kubelet[1588]: E0513 08:30:35.420748 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:35.490135 kubelet[1588]: E0513 08:30:35.489823 1588 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 08:30:36.423382 kubelet[1588]: E0513 08:30:36.423272 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:36.660986 kubelet[1588]: I0513 08:30:36.660834 1588 topology_manager.go:215] "Topology Admit Handler" podUID="a081643d-f649-46ef-a97e-92cd49ece2a8" podNamespace="kube-system" podName="cilium-4qqct" May 13 08:30:36.661706 kubelet[1588]: E0513 08:30:36.661668 1588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40dee87-0d88-4059-9d72-333ed813361c" containerName="apply-sysctl-overwrites" May 13 08:30:36.661981 kubelet[1588]: E0513 08:30:36.661946 1588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40dee87-0d88-4059-9d72-333ed813361c" containerName="clean-cilium-state" May 13 08:30:36.662298 kubelet[1588]: E0513 08:30:36.662205 1588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40dee87-0d88-4059-9d72-333ed813361c" containerName="cilium-agent" May 13 08:30:36.662510 kubelet[1588]: E0513 08:30:36.662479 1588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40dee87-0d88-4059-9d72-333ed813361c" containerName="mount-cgroup" May 13 08:30:36.662760 kubelet[1588]: E0513 08:30:36.662727 1588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40dee87-0d88-4059-9d72-333ed813361c" containerName="mount-bpf-fs" May 13 08:30:36.663169 kubelet[1588]: I0513 08:30:36.663125 1588 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40dee87-0d88-4059-9d72-333ed813361c" containerName="cilium-agent" May 13 08:30:36.677466 kubelet[1588]: I0513 08:30:36.677142 1588 topology_manager.go:215] "Topology Admit Handler" podUID="b5374b9a-6a19-4d75-b103-17181d22116d" podNamespace="kube-system" podName="cilium-operator-599987898-lntkw" May 13 08:30:36.757045 kubelet[1588]: I0513 08:30:36.756953 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-run\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.757704 kubelet[1588]: I0513 08:30:36.757658 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cni-path\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.758077 kubelet[1588]: I0513 08:30:36.757961 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-etc-cni-netd\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.758342 kubelet[1588]: I0513 08:30:36.758302 1588 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-xtables-lock\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.759000 kubelet[1588]: I0513 08:30:36.758667 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-ipsec-secrets\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.759173 kubelet[1588]: I0513 08:30:36.759043 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-kernel\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.759333 kubelet[1588]: I0513 08:30:36.759243 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsf8h\" (UniqueName: \"kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-kube-api-access-xsf8h\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.759552 kubelet[1588]: I0513 08:30:36.759410 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5374b9a-6a19-4d75-b103-17181d22116d-cilium-config-path\") pod \"cilium-operator-599987898-lntkw\" (UID: \"b5374b9a-6a19-4d75-b103-17181d22116d\") " pod="kube-system/cilium-operator-599987898-lntkw" May 13 08:30:36.759757 kubelet[1588]: I0513 08:30:36.759671 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-bpf-maps\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.759900 kubelet[1588]: I0513 08:30:36.759856 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-cgroup\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.760021 kubelet[1588]: I0513 08:30:36.759965 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-net\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.760321 kubelet[1588]: I0513 08:30:36.760258 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-hubble-tls\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.761289 kubelet[1588]: I0513 08:30:36.761176 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hxrnx\" (UniqueName: \"kubernetes.io/projected/b5374b9a-6a19-4d75-b103-17181d22116d-kube-api-access-hxrnx\") pod \"cilium-operator-599987898-lntkw\" (UID: \"b5374b9a-6a19-4d75-b103-17181d22116d\") " pod="kube-system/cilium-operator-599987898-lntkw" May 13 08:30:36.761824 kubelet[1588]: I0513 08:30:36.761749 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-hostproc\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.762004 kubelet[1588]: I0513 08:30:36.761905 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-lib-modules\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.762105 kubelet[1588]: I0513 08:30:36.762048 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-clustermesh-secrets\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.762263 kubelet[1588]: I0513 08:30:36.762212 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-config-path\") pod \"cilium-4qqct\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " pod="kube-system/cilium-4qqct" May 13 08:30:36.976609 env[1258]: time="2025-05-13T08:30:36.976385233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qqct,Uid:a081643d-f649-46ef-a97e-92cd49ece2a8,Namespace:kube-system,Attempt:0,}" May 13 08:30:36.985353 env[1258]: time="2025-05-13T08:30:36.984983027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lntkw,Uid:b5374b9a-6a19-4d75-b103-17181d22116d,Namespace:kube-system,Attempt:0,}" May 13 08:30:37.006498 env[1258]: time="2025-05-13T08:30:37.006412890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:30:37.006770 env[1258]: time="2025-05-13T08:30:37.006504942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:30:37.006770 env[1258]: time="2025-05-13T08:30:37.006537652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:30:37.006902 env[1258]: time="2025-05-13T08:30:37.006755859Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3 pid=3113 runtime=io.containerd.runc.v2 May 13 08:30:37.010360 env[1258]: time="2025-05-13T08:30:37.010281009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:30:37.010531 env[1258]: time="2025-05-13T08:30:37.010363542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:30:37.010531 env[1258]: time="2025-05-13T08:30:37.010402135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:30:37.010786 env[1258]: time="2025-05-13T08:30:37.010744284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e5a3a8dbcaf5d9392da94ea5d708553fbf52c7dfe11cda7b10dedc612e3fc0f pid=3127 runtime=io.containerd.runc.v2 May 13 08:30:37.079571 env[1258]: time="2025-05-13T08:30:37.079507387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qqct,Uid:a081643d-f649-46ef-a97e-92cd49ece2a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\"" May 13 08:30:37.084353 env[1258]: time="2025-05-13T08:30:37.084302136Z" level=info msg="CreateContainer within sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 08:30:37.111686 env[1258]: time="2025-05-13T08:30:37.106210553Z" level=info msg="CreateContainer within sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40\"" May 13 08:30:37.111686 env[1258]: time="2025-05-13T08:30:37.107267776Z" level=info msg="StartContainer for \"10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40\"" May 13 08:30:37.187334 env[1258]: time="2025-05-13T08:30:37.187274724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-lntkw,Uid:b5374b9a-6a19-4d75-b103-17181d22116d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e5a3a8dbcaf5d9392da94ea5d708553fbf52c7dfe11cda7b10dedc612e3fc0f\"" May 13 08:30:37.190276 env[1258]: time="2025-05-13T08:30:37.190160530Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 08:30:37.206627 env[1258]: time="2025-05-13T08:30:37.205008320Z" level=info msg="StartContainer for \"10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40\" returns successfully" May 13 08:30:37.249801 env[1258]: time="2025-05-13T08:30:37.249656037Z" level=info msg="shim disconnected" id=10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40 May 13 08:30:37.250102 env[1258]: time="2025-05-13T08:30:37.250058478Z" level=warning msg="cleaning up after shim disconnected" id=10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40 namespace=k8s.io May 13 08:30:37.250224 env[1258]: time="2025-05-13T08:30:37.250206444Z" level=info msg="cleaning up dead shim" May 13 08:30:37.260160 env[1258]: time="2025-05-13T08:30:37.260107713Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3239 runtime=io.containerd.runc.v2\n" May 13 08:30:37.424683 kubelet[1588]: E0513 08:30:37.424488 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:38.076305 env[1258]: time="2025-05-13T08:30:38.075335223Z" level=info msg="CreateContainer within sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 
08:30:38.116287 env[1258]: time="2025-05-13T08:30:38.116114556Z" level=info msg="CreateContainer within sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8\"" May 13 08:30:38.118300 env[1258]: time="2025-05-13T08:30:38.118206922Z" level=info msg="StartContainer for \"dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8\"" May 13 08:30:38.219620 env[1258]: time="2025-05-13T08:30:38.219538233Z" level=info msg="StartContainer for \"dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8\" returns successfully" May 13 08:30:38.256618 env[1258]: time="2025-05-13T08:30:38.256501734Z" level=info msg="shim disconnected" id=dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8 May 13 08:30:38.256936 env[1258]: time="2025-05-13T08:30:38.256900328Z" level=warning msg="cleaning up after shim disconnected" id=dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8 namespace=k8s.io May 13 08:30:38.257039 env[1258]: time="2025-05-13T08:30:38.257020693Z" level=info msg="cleaning up dead shim" May 13 08:30:38.265650 env[1258]: time="2025-05-13T08:30:38.265570120Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3304 runtime=io.containerd.runc.v2\n" May 13 08:30:38.425741 kubelet[1588]: E0513 08:30:38.425453 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:38.880495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8-rootfs.mount: Deactivated successfully. May 13 08:30:39.077819 env[1258]: time="2025-05-13T08:30:39.077737946Z" level=info msg="StopPodSandbox for \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\"" May 13 08:30:39.080752 env[1258]: time="2025-05-13T08:30:39.077876104Z" level=info msg="Container to stop \"10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:39.080752 env[1258]: time="2025-05-13T08:30:39.077914827Z" level=info msg="Container to stop \"dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 08:30:39.084390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3-shm.mount: Deactivated successfully. May 13 08:30:39.188546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3-rootfs.mount: Deactivated successfully. 
May 13 08:30:39.257810 env[1258]: time="2025-05-13T08:30:39.257690960Z" level=info msg="shim disconnected" id=8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3 May 13 08:30:39.257810 env[1258]: time="2025-05-13T08:30:39.257808429Z" level=warning msg="cleaning up after shim disconnected" id=8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3 namespace=k8s.io May 13 08:30:39.258284 env[1258]: time="2025-05-13T08:30:39.257834889Z" level=info msg="cleaning up dead shim" May 13 08:30:39.296847 env[1258]: time="2025-05-13T08:30:39.296742455Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3339 runtime=io.containerd.runc.v2\n" May 13 08:30:39.297773 env[1258]: time="2025-05-13T08:30:39.297695723Z" level=info msg="TearDown network for sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" successfully" May 13 08:30:39.297934 env[1258]: time="2025-05-13T08:30:39.297772958Z" level=info msg="StopPodSandbox for \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" returns successfully" May 13 08:30:39.402587 kubelet[1588]: I0513 08:30:39.402506 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-ipsec-secrets\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.402587 kubelet[1588]: I0513 08:30:39.402573 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-clustermesh-secrets\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.402587 kubelet[1588]: I0513 08:30:39.402625 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-etc-cni-netd\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.402587 kubelet[1588]: I0513 08:30:39.402645 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-xtables-lock\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.402587 kubelet[1588]: I0513 08:30:39.402675 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-config-path\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403396 kubelet[1588]: I0513 08:30:39.402698 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cni-path\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403396 kubelet[1588]: I0513 08:30:39.402715 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-bpf-maps\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: 
\"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403396 kubelet[1588]: I0513 08:30:39.402743 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-net\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403396 kubelet[1588]: I0513 08:30:39.402764 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-cgroup\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403396 kubelet[1588]: I0513 08:30:39.402783 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-hubble-tls\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403396 kubelet[1588]: I0513 08:30:39.402805 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-hostproc\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403976 kubelet[1588]: I0513 08:30:39.402824 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-run\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403976 kubelet[1588]: I0513 08:30:39.402858 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-kernel\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403976 kubelet[1588]: I0513 08:30:39.402878 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsf8h\" (UniqueName: \"kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-kube-api-access-xsf8h\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403976 kubelet[1588]: I0513 08:30:39.402896 1588 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-lib-modules\") pod \"a081643d-f649-46ef-a97e-92cd49ece2a8\" (UID: \"a081643d-f649-46ef-a97e-92cd49ece2a8\") " May 13 08:30:39.403976 kubelet[1588]: I0513 08:30:39.402961 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.407813 systemd[1]: var-lib-kubelet-pods-a081643d\x2df649\x2d46ef\x2da97e\x2d92cd49ece2a8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 13 08:30:39.412399 kubelet[1588]: I0513 08:30:39.412340 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 08:30:39.412494 kubelet[1588]: I0513 08:30:39.412448 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.412494 kubelet[1588]: I0513 08:30:39.412473 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.412494 kubelet[1588]: I0513 08:30:39.412493 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.415293 kubelet[1588]: I0513 08:30:39.415254 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 08:30:39.415400 kubelet[1588]: I0513 08:30:39.415303 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cni-path" (OuterVolumeSpecName: "cni-path") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.415400 kubelet[1588]: I0513 08:30:39.415326 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.415400 kubelet[1588]: I0513 08:30:39.415350 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-hostproc" (OuterVolumeSpecName: "hostproc") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.415400 kubelet[1588]: I0513 08:30:39.415370 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.418998 systemd[1]: var-lib-kubelet-pods-a081643d\x2df649\x2d46ef\x2da97e\x2d92cd49ece2a8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 08:30:39.422118 kubelet[1588]: I0513 08:30:39.422013 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.422118 kubelet[1588]: I0513 08:30:39.422092 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 08:30:39.422763 kubelet[1588]: I0513 08:30:39.422731 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 08:30:39.424812 kubelet[1588]: I0513 08:30:39.424762 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:30:39.425981 kubelet[1588]: E0513 08:30:39.425934 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:39.427672 kubelet[1588]: I0513 08:30:39.427622 1588 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-kube-api-access-xsf8h" (OuterVolumeSpecName: "kube-api-access-xsf8h") pod "a081643d-f649-46ef-a97e-92cd49ece2a8" (UID: "a081643d-f649-46ef-a97e-92cd49ece2a8"). InnerVolumeSpecName "kube-api-access-xsf8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.503956 1588 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-etc-cni-netd\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.504102 1588 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-xtables-lock\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.504151 1588 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-config-path\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.504227 1588 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cni-path\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.504251 1588 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-bpf-maps\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.504274 1588 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-net\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.504298 1588 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-cgroup\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.506809 kubelet[1588]: I0513 08:30:39.504320 1588 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-hubble-tls\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.507802 kubelet[1588]: I0513 08:30:39.504343 1588 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-hostproc\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.507802 kubelet[1588]: I0513 08:30:39.504363 1588 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-run\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.507802 kubelet[1588]: I0513 08:30:39.504384 1588 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-host-proc-sys-kernel\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.507802 kubelet[1588]: I0513 08:30:39.504409 1588 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xsf8h\" (UniqueName: \"kubernetes.io/projected/a081643d-f649-46ef-a97e-92cd49ece2a8-kube-api-access-xsf8h\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.507802 kubelet[1588]: I0513 08:30:39.504432 1588 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a081643d-f649-46ef-a97e-92cd49ece2a8-lib-modules\") on 
node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.507802 kubelet[1588]: I0513 08:30:39.504453 1588 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-cilium-ipsec-secrets\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.507802 kubelet[1588]: I0513 08:30:39.504475 1588 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a081643d-f649-46ef-a97e-92cd49ece2a8-clustermesh-secrets\") on node \"172.24.4.231\" DevicePath \"\"" May 13 08:30:39.882957 systemd[1]: var-lib-kubelet-pods-a081643d\x2df649\x2d46ef\x2da97e\x2d92cd49ece2a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxsf8h.mount: Deactivated successfully. May 13 08:30:39.883329 systemd[1]: var-lib-kubelet-pods-a081643d\x2df649\x2d46ef\x2da97e\x2d92cd49ece2a8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 08:30:40.085671 kubelet[1588]: I0513 08:30:40.083931 1588 scope.go:117] "RemoveContainer" containerID="dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8" May 13 08:30:40.097520 env[1258]: time="2025-05-13T08:30:40.097408617Z" level=info msg="RemoveContainer for \"dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8\"" May 13 08:30:40.115781 env[1258]: time="2025-05-13T08:30:40.115363654Z" level=info msg="RemoveContainer for \"dfe6b14920e6a2353a285b3f0421e0907f923d16f61f784d66a2db367e7591f8\" returns successfully" May 13 08:30:40.117073 kubelet[1588]: I0513 08:30:40.116519 1588 scope.go:117] "RemoveContainer" containerID="10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40" May 13 08:30:40.118102 env[1258]: time="2025-05-13T08:30:40.118070027Z" level=info msg="RemoveContainer for \"10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40\"" May 13 08:30:40.128858 env[1258]: time="2025-05-13T08:30:40.128816218Z" level=info msg="RemoveContainer for \"10afd1b7e77c23e164bd7c6cb77870c7d6d4a09e11aaf8324850fc187fe2dc40\" returns successfully" May 13 08:30:40.180752 env[1258]: time="2025-05-13T08:30:40.180079604Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:40.183145 env[1258]: time="2025-05-13T08:30:40.183100934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:40.185557 env[1258]: time="2025-05-13T08:30:40.185512356Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 08:30:40.186399 env[1258]: time="2025-05-13T08:30:40.186339730Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 08:30:40.191069 env[1258]: time="2025-05-13T08:30:40.191012352Z" level=info msg="CreateContainer within sandbox \"0e5a3a8dbcaf5d9392da94ea5d708553fbf52c7dfe11cda7b10dedc612e3fc0f\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 08:30:40.206185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387214612.mount: Deactivated successfully. May 13 08:30:40.230627 kubelet[1588]: I0513 08:30:40.228856 1588 topology_manager.go:215] "Topology Admit Handler" podUID="2083634d-d1b6-4ee9-b423-647b9109edd9" podNamespace="kube-system" podName="cilium-drtfb" May 13 08:30:40.230627 kubelet[1588]: E0513 08:30:40.228963 1588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a081643d-f649-46ef-a97e-92cd49ece2a8" containerName="mount-cgroup" May 13 08:30:40.230627 kubelet[1588]: E0513 08:30:40.228977 1588 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a081643d-f649-46ef-a97e-92cd49ece2a8" containerName="apply-sysctl-overwrites" May 13 08:30:40.230627 kubelet[1588]: I0513 08:30:40.229014 1588 memory_manager.go:354] "RemoveStaleState removing state" podUID="a081643d-f649-46ef-a97e-92cd49ece2a8" containerName="apply-sysctl-overwrites" May 13 08:30:40.232939 env[1258]: time="2025-05-13T08:30:40.232855925Z" level=info msg="CreateContainer within sandbox \"0e5a3a8dbcaf5d9392da94ea5d708553fbf52c7dfe11cda7b10dedc612e3fc0f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9189a191493d1365f95c881dadd64f411701a9c10b06fa84ca6f5030e50f235b\"" May 13 08:30:40.236032 env[1258]: time="2025-05-13T08:30:40.235974226Z" level=info msg="StartContainer for \"9189a191493d1365f95c881dadd64f411701a9c10b06fa84ca6f5030e50f235b\"" May 13 08:30:40.342502 env[1258]: time="2025-05-13T08:30:40.342435764Z" level=info msg="StartContainer for \"9189a191493d1365f95c881dadd64f411701a9c10b06fa84ca6f5030e50f235b\" returns successfully" May 13 08:30:40.415188 kubelet[1588]: I0513 08:30:40.414163 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-host-proc-sys-net\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415188 kubelet[1588]: I0513 08:30:40.414244 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2083634d-d1b6-4ee9-b423-647b9109edd9-hubble-tls\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415188 kubelet[1588]: I0513 08:30:40.414288 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-cilium-run\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415188 kubelet[1588]: I0513 08:30:40.414421 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2083634d-d1b6-4ee9-b423-647b9109edd9-clustermesh-secrets\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415188 kubelet[1588]: I0513 08:30:40.414554 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-host-proc-sys-kernel\") pod \"cilium-drtfb\" (UID: 
\"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415188 kubelet[1588]: I0513 08:30:40.414661 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-bpf-maps\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415980 kubelet[1588]: I0513 08:30:40.414718 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-cilium-cgroup\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415980 kubelet[1588]: I0513 08:30:40.414766 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2083634d-d1b6-4ee9-b423-647b9109edd9-cilium-config-path\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415980 kubelet[1588]: I0513 08:30:40.414860 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2083634d-d1b6-4ee9-b423-647b9109edd9-cilium-ipsec-secrets\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415980 kubelet[1588]: I0513 08:30:40.414953 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-cni-path\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415980 kubelet[1588]: I0513 08:30:40.415013 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-etc-cni-netd\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.415980 kubelet[1588]: I0513 08:30:40.415082 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-xtables-lock\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.416424 kubelet[1588]: I0513 08:30:40.415147 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2nl\" (UniqueName: \"kubernetes.io/projected/2083634d-d1b6-4ee9-b423-647b9109edd9-kube-api-access-qq2nl\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.416424 kubelet[1588]: I0513 08:30:40.415207 1588 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-hostproc\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.416424 kubelet[1588]: I0513 08:30:40.415233 1588 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2083634d-d1b6-4ee9-b423-647b9109edd9-lib-modules\") pod \"cilium-drtfb\" (UID: \"2083634d-d1b6-4ee9-b423-647b9109edd9\") " pod="kube-system/cilium-drtfb" May 13 08:30:40.426417 kubelet[1588]: E0513 08:30:40.426334 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:40.498688 kubelet[1588]: E0513 08:30:40.498408 1588 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 08:30:40.544984 kubelet[1588]: I0513 08:30:40.544951 1588 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a081643d-f649-46ef-a97e-92cd49ece2a8" path="/var/lib/kubelet/pods/a081643d-f649-46ef-a97e-92cd49ece2a8/volumes" May 13 08:30:40.838554 env[1258]: time="2025-05-13T08:30:40.838263179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drtfb,Uid:2083634d-d1b6-4ee9-b423-647b9109edd9,Namespace:kube-system,Attempt:0,}" May 13 08:30:40.902985 env[1258]: time="2025-05-13T08:30:40.900617189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 08:30:40.902985 env[1258]: time="2025-05-13T08:30:40.900674836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 08:30:40.902985 env[1258]: time="2025-05-13T08:30:40.900689994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 08:30:40.902985 env[1258]: time="2025-05-13T08:30:40.900845625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17 pid=3407 runtime=io.containerd.runc.v2 May 13 08:30:40.964443 env[1258]: time="2025-05-13T08:30:40.964390919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drtfb,Uid:2083634d-d1b6-4ee9-b423-647b9109edd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\"" May 13 08:30:40.969182 env[1258]: time="2025-05-13T08:30:40.969121289Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 08:30:40.987033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971541179.mount: Deactivated successfully. 
May 13 08:30:40.994093 env[1258]: time="2025-05-13T08:30:40.994004581Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4592b8b859750f54d4bb9d0cef214fcdeb311e17ff6782f0b848c030af1865ce\"" May 13 08:30:40.995208 env[1258]: time="2025-05-13T08:30:40.995170848Z" level=info msg="StartContainer for \"4592b8b859750f54d4bb9d0cef214fcdeb311e17ff6782f0b848c030af1865ce\"" May 13 08:30:41.053649 env[1258]: time="2025-05-13T08:30:41.052926171Z" level=info msg="StartContainer for \"4592b8b859750f54d4bb9d0cef214fcdeb311e17ff6782f0b848c030af1865ce\" returns successfully" May 13 08:30:41.295325 kubelet[1588]: I0513 08:30:41.286543 1588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-lntkw" podStartSLOduration=2.287626461 podStartE2EDuration="5.286388533s" podCreationTimestamp="2025-05-13 08:30:36 +0000 UTC" firstStartedPulling="2025-05-13 08:30:37.189274416 +0000 UTC m=+88.148428122" lastFinishedPulling="2025-05-13 08:30:40.188036488 +0000 UTC m=+91.147190194" observedRunningTime="2025-05-13 08:30:41.285586356 +0000 UTC m=+92.244740102" watchObservedRunningTime="2025-05-13 08:30:41.286388533 +0000 UTC m=+92.245542289" May 13 08:30:41.309366 env[1258]: time="2025-05-13T08:30:41.309203046Z" level=info msg="shim disconnected" id=4592b8b859750f54d4bb9d0cef214fcdeb311e17ff6782f0b848c030af1865ce May 13 08:30:41.309366 env[1258]: time="2025-05-13T08:30:41.309364398Z" level=warning msg="cleaning up after shim disconnected" id=4592b8b859750f54d4bb9d0cef214fcdeb311e17ff6782f0b848c030af1865ce namespace=k8s.io May 13 08:30:41.310987 env[1258]: time="2025-05-13T08:30:41.309405323Z" level=info msg="cleaning up dead shim" May 13 08:30:41.349955 env[1258]: time="2025-05-13T08:30:41.349777595Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3490 runtime=io.containerd.runc.v2\n" May 13 08:30:41.427695 kubelet[1588]: E0513 08:30:41.427437 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:42.108261 env[1258]: time="2025-05-13T08:30:42.108170627Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 08:30:42.146979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414957863.mount: Deactivated successfully. 
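Annotation: the "CreateContainer within sandbox ... for &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" and "StartContainer" entries are containerd's CRI plugin acting on calls from kubelet. A hedged sketch of the same two CRI calls made directly against the runtime service; the sandbox ID and pod UID are copied from the log, while the socket path and image reference are assumptions:

```go
// Sketch only: issues CreateContainer/StartContainer CRI calls of the kind
// that produce the containerd entries above. Socket path and image are
// assumptions; sandbox ID and pod metadata are taken from the log.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxID := "e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17" // from the log

	// CreateContainer within the sandbox, mirroring
	// &ContainerMetadata{Name:mount-cgroup,Attempt:0,} above.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.15"}, // assumed image
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "cilium-drtfb",
				Namespace: "kube-system",
				Uid:       "2083634d-d1b6-4ee9-b423-647b9109edd9",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer with the returned container id, as in the
	// "StartContainer for ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```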
May 13 08:30:42.169981 env[1258]: time="2025-05-13T08:30:42.169816734Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2789577d9312ff9847f4b3e9d781025121b3e4a39a60cb872b125edb8298a8ca\"" May 13 08:30:42.172688 env[1258]: time="2025-05-13T08:30:42.172431616Z" level=info msg="StartContainer for \"2789577d9312ff9847f4b3e9d781025121b3e4a39a60cb872b125edb8298a8ca\"" May 13 08:30:42.221080 kubelet[1588]: I0513 08:30:42.221002 1588 setters.go:580] "Node became not ready" node="172.24.4.231" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T08:30:42Z","lastTransitionTime":"2025-05-13T08:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 08:30:42.263079 env[1258]: time="2025-05-13T08:30:42.262986646Z" level=info msg="StartContainer for \"2789577d9312ff9847f4b3e9d781025121b3e4a39a60cb872b125edb8298a8ca\" returns successfully" May 13 08:30:42.290584 env[1258]: time="2025-05-13T08:30:42.290487843Z" level=info msg="shim disconnected" id=2789577d9312ff9847f4b3e9d781025121b3e4a39a60cb872b125edb8298a8ca May 13 08:30:42.290980 env[1258]: time="2025-05-13T08:30:42.290957590Z" level=warning msg="cleaning up after shim disconnected" id=2789577d9312ff9847f4b3e9d781025121b3e4a39a60cb872b125edb8298a8ca namespace=k8s.io May 13 08:30:42.291124 env[1258]: time="2025-05-13T08:30:42.291104273Z" level=info msg="cleaning up dead shim" May 13 08:30:42.301911 env[1258]: time="2025-05-13T08:30:42.301829188Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3552 runtime=io.containerd.runc.v2\n" May 13 08:30:42.429180 kubelet[1588]: E0513 08:30:42.428864 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:42.886054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2789577d9312ff9847f4b3e9d781025121b3e4a39a60cb872b125edb8298a8ca-rootfs.mount: Deactivated successfully. May 13 08:30:43.114548 env[1258]: time="2025-05-13T08:30:43.114433850Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 08:30:43.163916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243993996.mount: Deactivated successfully. 
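Annotation: the setters.go:580 entry above records node 172.24.4.231 flipping to Ready=False because the CNI plugin is not yet initialized. A minimal client-go sketch that reads the same condition back from the API server; the node name is taken from the log, and the kubeconfig path is an assumption:

```go
// Sketch only: reads the Ready condition of the node named in the
// "Node became not ready" entry. The kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(), "172.24.4.231", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Print the same fields that appear in the condition JSON logged above.
	for _, c := range node.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```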
May 13 08:30:43.178959 env[1258]: time="2025-05-13T08:30:43.178843944Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33b8167e230fb1b625d3b313abe4c4b105ebec3fb9142fe8fdfd2d69f718bced\"" May 13 08:30:43.181662 env[1258]: time="2025-05-13T08:30:43.180492702Z" level=info msg="StartContainer for \"33b8167e230fb1b625d3b313abe4c4b105ebec3fb9142fe8fdfd2d69f718bced\"" May 13 08:30:43.296115 env[1258]: time="2025-05-13T08:30:43.295962585Z" level=info msg="StartContainer for \"33b8167e230fb1b625d3b313abe4c4b105ebec3fb9142fe8fdfd2d69f718bced\" returns successfully" May 13 08:30:43.336318 env[1258]: time="2025-05-13T08:30:43.336200115Z" level=info msg="shim disconnected" id=33b8167e230fb1b625d3b313abe4c4b105ebec3fb9142fe8fdfd2d69f718bced May 13 08:30:43.336803 env[1258]: time="2025-05-13T08:30:43.336764299Z" level=warning msg="cleaning up after shim disconnected" id=33b8167e230fb1b625d3b313abe4c4b105ebec3fb9142fe8fdfd2d69f718bced namespace=k8s.io May 13 08:30:43.336919 env[1258]: time="2025-05-13T08:30:43.336900412Z" level=info msg="cleaning up dead shim" May 13 08:30:43.352152 env[1258]: time="2025-05-13T08:30:43.352087490Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3607 runtime=io.containerd.runc.v2\n" May 13 08:30:43.429352 kubelet[1588]: E0513 08:30:43.429210 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:43.886540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33b8167e230fb1b625d3b313abe4c4b105ebec3fb9142fe8fdfd2d69f718bced-rootfs.mount: Deactivated successfully. May 13 08:30:44.131778 env[1258]: time="2025-05-13T08:30:44.131645808Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 08:30:44.181398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990192625.mount: Deactivated successfully. 
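Annotation: the log only shows the mount-bpf-fs init container starting and exiting; per Cilium's documented behaviour that step mounts the BPF filesystem on the host. A minimal sketch of that mount, assuming the conventional /sys/fs/bpf mountpoint (the path does not appear in the log):

```go
// Sketch only: what a "mount-bpf-fs" style init step boils down to —
// mounting the BPF filesystem. The /sys/fs/bpf target is an assumption.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of: mount -t bpf bpffs /sys/fs/bpf
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		// EBUSY typically means the filesystem is already mounted; ignore it.
		if err != unix.EBUSY {
			log.Fatalf("mounting bpffs: %v", err)
		}
	}
	log.Println("bpffs mounted at /sys/fs/bpf")
}
```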
May 13 08:30:44.185973 env[1258]: time="2025-05-13T08:30:44.185828212Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f07b8ae93059879e82607834348c93136f5204b16e79138c10e18423a712009\"" May 13 08:30:44.188849 env[1258]: time="2025-05-13T08:30:44.188685558Z" level=info msg="StartContainer for \"3f07b8ae93059879e82607834348c93136f5204b16e79138c10e18423a712009\"" May 13 08:30:44.265118 env[1258]: time="2025-05-13T08:30:44.265066705Z" level=info msg="StartContainer for \"3f07b8ae93059879e82607834348c93136f5204b16e79138c10e18423a712009\" returns successfully" May 13 08:30:44.291931 env[1258]: time="2025-05-13T08:30:44.291851481Z" level=info msg="shim disconnected" id=3f07b8ae93059879e82607834348c93136f5204b16e79138c10e18423a712009 May 13 08:30:44.292383 env[1258]: time="2025-05-13T08:30:44.292359760Z" level=warning msg="cleaning up after shim disconnected" id=3f07b8ae93059879e82607834348c93136f5204b16e79138c10e18423a712009 namespace=k8s.io May 13 08:30:44.292518 env[1258]: time="2025-05-13T08:30:44.292498339Z" level=info msg="cleaning up dead shim" May 13 08:30:44.302607 env[1258]: time="2025-05-13T08:30:44.302473385Z" level=warning msg="cleanup warnings time=\"2025-05-13T08:30:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3660 runtime=io.containerd.runc.v2\n" May 13 08:30:44.430248 kubelet[1588]: E0513 08:30:44.430145 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:44.886580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f07b8ae93059879e82607834348c93136f5204b16e79138c10e18423a712009-rootfs.mount: Deactivated successfully. May 13 08:30:45.137807 env[1258]: time="2025-05-13T08:30:45.137382604Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 08:30:45.172965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072845581.mount: Deactivated successfully. 
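Annotation: each init step (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) ends with the same containerd sequence above: `msg="shim disconnected" id=...`, "cleaning up after shim disconnected", "cleaning up dead shim", then a cleanup-warnings line. A small sketch that pulls the exiting container IDs out of that pattern, matching the literal field layout of these entries:

```go
// Sketch only: extracts container IDs from the containerd
// `msg="shim disconnected" id=...` entries seen above, one per init step.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the literal field layout used by the env[1258] entries above.
	re := regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println("container exited:", m[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Fed with the journal text on stdin (for example `journalctl -u containerd | go run shim.go`), it prints one ID per completed init container.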
May 13 08:30:45.188728 env[1258]: time="2025-05-13T08:30:45.188657715Z" level=info msg="CreateContainer within sandbox \"e55d322841bfa54a440b2dd7875fcf1a1b47a98afd1864b962ae5b3dbad44a17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955\"" May 13 08:30:45.190397 env[1258]: time="2025-05-13T08:30:45.190305202Z" level=info msg="StartContainer for \"62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955\"" May 13 08:30:45.286645 env[1258]: time="2025-05-13T08:30:45.285187708Z" level=info msg="StartContainer for \"62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955\" returns successfully" May 13 08:30:45.431127 kubelet[1588]: E0513 08:30:45.430868 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:45.698674 kernel: cryptd: max_cpu_qlen set to 1000 May 13 08:30:45.755631 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) May 13 08:30:46.200250 kubelet[1588]: I0513 08:30:46.200090 1588 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-drtfb" podStartSLOduration=6.199934268 podStartE2EDuration="6.199934268s" podCreationTimestamp="2025-05-13 08:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 08:30:46.199669053 +0000 UTC m=+97.158822859" watchObservedRunningTime="2025-05-13 08:30:46.199934268 +0000 UTC m=+97.159088035" May 13 08:30:46.431730 kubelet[1588]: E0513 08:30:46.431408 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:47.178346 systemd[1]: run-containerd-runc-k8s.io-62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955-runc.aYuaNz.mount: Deactivated successfully. May 13 08:30:47.432951 kubelet[1588]: E0513 08:30:47.432665 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:48.433766 kubelet[1588]: E0513 08:30:48.433656 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:49.256291 systemd-networkd[1044]: lxc_health: Link UP May 13 08:30:49.268718 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 08:30:49.270249 systemd-networkd[1044]: lxc_health: Gained carrier May 13 08:30:49.393535 systemd[1]: run-containerd-runc-k8s.io-62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955-runc.Cz4Cdj.mount: Deactivated successfully. 
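Annotation: the pod_startup_latency_tracker entry above for cilium-drtfb, and the earlier one for cilium-operator, can be reproduced from the timestamps printed in the same entries: end-to-end duration is observed-running time minus podCreationTimestamp, and the SLO duration subtracts image-pull time. Which timestamp feeds which figure is inferred here from the numbers lining up, not from kubelet documentation; a quick check:

```go
// Sketch only: recomputes the durations reported by the
// pod_startup_latency_tracker entries from the timestamps they print.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// time.Time's default String() layout, which is what kubelet prints.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// cilium-drtfb: nothing was pulled, so SLO duration == end-to-end duration.
	created := mustParse("2025-05-13 08:30:40 +0000 UTC")
	running := mustParse("2025-05-13 08:30:46.199934268 +0000 UTC")
	fmt.Println("cilium-drtfb E2E:", running.Sub(created)) // 6.199934268s

	// cilium-operator (earlier entry): SLO duration excludes image pulling.
	opCreated := mustParse("2025-05-13 08:30:36 +0000 UTC")
	opRunning := mustParse("2025-05-13 08:30:41.286388533 +0000 UTC")
	pullStart := mustParse("2025-05-13 08:30:37.189274416 +0000 UTC")
	pullEnd := mustParse("2025-05-13 08:30:40.188036488 +0000 UTC")
	e2e := opRunning.Sub(opCreated)
	fmt.Println("cilium-operator E2E:", e2e)                        // 5.286388533s
	fmt.Println("cilium-operator SLO:", e2e-pullEnd.Sub(pullStart)) // 2.287626461s
}
```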
May 13 08:30:49.435968 kubelet[1588]: E0513 08:30:49.435900 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:50.332900 kubelet[1588]: E0513 08:30:50.332845 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:50.436741 kubelet[1588]: E0513 08:30:50.436629 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:51.289960 systemd-networkd[1044]: lxc_health: Gained IPv6LL May 13 08:30:51.437629 kubelet[1588]: E0513 08:30:51.437428 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:51.747045 systemd[1]: run-containerd-runc-k8s.io-62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955-runc.4rCRCg.mount: Deactivated successfully. May 13 08:30:52.438907 kubelet[1588]: E0513 08:30:52.438744 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:53.440293 kubelet[1588]: E0513 08:30:53.440156 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:53.984400 systemd[1]: run-containerd-runc-k8s.io-62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955-runc.JihwdE.mount: Deactivated successfully. May 13 08:30:54.441777 kubelet[1588]: E0513 08:30:54.441690 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:55.443809 kubelet[1588]: E0513 08:30:55.443718 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:56.242753 systemd[1]: run-containerd-runc-k8s.io-62868e346f6a89f4e6c48335c93db80b70a78e9c20b1a8d3ac64298ee3d56955-runc.RLLyyd.mount: Deactivated successfully. 
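Annotation: from here to the end of the capture, kubelet's file-based (static pod) config source keeps logging "Unable to read config path" for /etc/kubernetes/manifests, roughly once a second from file_linux.go:61 and every 20 seconds from file.go:104, because the directory does not exist. If static pods are actually wanted on this node, creating the directory is enough to quiet the messages; a trivial sketch, with the path taken from the log itself:

```go
// Sketch only: creates the static-pod manifest directory that the repeated
// kubelet "Unable to read config path" messages refer to. Whether static
// pods are wanted on this node is an assumption.
package main

import (
	"log"
	"os"
)

func main() {
	if err := os.MkdirAll("/etc/kubernetes/manifests", 0o755); err != nil {
		log.Fatal(err)
	}
	log.Println("created /etc/kubernetes/manifests")
}
```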
May 13 08:30:56.445126 kubelet[1588]: E0513 08:30:56.445020 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:57.445930 kubelet[1588]: E0513 08:30:57.445806 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:58.447005 kubelet[1588]: E0513 08:30:58.446938 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:30:59.449018 kubelet[1588]: E0513 08:30:59.448812 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:00.450826 kubelet[1588]: E0513 08:31:00.450758 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:01.452438 kubelet[1588]: E0513 08:31:01.452369 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:02.454574 kubelet[1588]: E0513 08:31:02.454478 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:03.455655 kubelet[1588]: E0513 08:31:03.455565 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:04.457520 kubelet[1588]: E0513 08:31:04.457438 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:05.459081 kubelet[1588]: E0513 08:31:05.458936 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:06.460185 kubelet[1588]: E0513 08:31:06.460121 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:07.461352 kubelet[1588]: E0513 08:31:07.461291 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:08.463504 kubelet[1588]: E0513 08:31:08.463396 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:09.465299 kubelet[1588]: E0513 08:31:09.465155 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:10.333696 kubelet[1588]: E0513 08:31:10.333554 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:10.400826 env[1258]: time="2025-05-13T08:31:10.400382927Z" level=info msg="StopPodSandbox for \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\"" May 13 08:31:10.403279 env[1258]: time="2025-05-13T08:31:10.402673069Z" level=info msg="TearDown network for sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" successfully" May 13 08:31:10.403427 env[1258]: time="2025-05-13T08:31:10.403307104Z" level=info msg="StopPodSandbox for \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" returns successfully" May 13 08:31:10.407121 env[1258]: time="2025-05-13T08:31:10.406839027Z" level=info msg="RemovePodSandbox for \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\"" May 13 08:31:10.407584 env[1258]: time="2025-05-13T08:31:10.407345765Z" level=info 
msg="Forcibly stopping sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\"" May 13 08:31:10.408018 env[1258]: time="2025-05-13T08:31:10.407870516Z" level=info msg="TearDown network for sandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" successfully" May 13 08:31:10.426249 env[1258]: time="2025-05-13T08:31:10.426154218Z" level=info msg="RemovePodSandbox \"8bbbfffef080cee10db03ef40f0c72a0ef7e34074a244a9818e9571cdfc2f3e3\" returns successfully" May 13 08:31:10.428234 env[1258]: time="2025-05-13T08:31:10.428116426Z" level=info msg="StopPodSandbox for \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\"" May 13 08:31:10.428655 env[1258]: time="2025-05-13T08:31:10.428413341Z" level=info msg="TearDown network for sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" successfully" May 13 08:31:10.428809 env[1258]: time="2025-05-13T08:31:10.428662317Z" level=info msg="StopPodSandbox for \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" returns successfully" May 13 08:31:10.429977 env[1258]: time="2025-05-13T08:31:10.429920940Z" level=info msg="RemovePodSandbox for \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\"" May 13 08:31:10.430382 env[1258]: time="2025-05-13T08:31:10.430280121Z" level=info msg="Forcibly stopping sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\"" May 13 08:31:10.430830 env[1258]: time="2025-05-13T08:31:10.430774716Z" level=info msg="TearDown network for sandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" successfully" May 13 08:31:10.438305 env[1258]: time="2025-05-13T08:31:10.438224033Z" level=info msg="RemovePodSandbox \"d421e3e6353682fb2d3a827429cc0abfee00cd0a61045a9277b3d64d0d7562a5\" returns successfully" May 13 08:31:10.466071 kubelet[1588]: E0513 08:31:10.465950 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:11.467051 kubelet[1588]: E0513 08:31:11.466902 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:12.467411 kubelet[1588]: E0513 08:31:12.467303 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:13.469464 kubelet[1588]: E0513 08:31:13.469379 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:14.471356 kubelet[1588]: E0513 08:31:14.471202 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:15.473212 kubelet[1588]: E0513 08:31:15.473094 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:16.474104 kubelet[1588]: E0513 08:31:16.473961 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:17.474738 kubelet[1588]: E0513 08:31:17.474641 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:18.475380 kubelet[1588]: E0513 08:31:18.475302 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:19.476695 kubelet[1588]: E0513 08:31:19.476567 1588 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:20.477447 kubelet[1588]: E0513 08:31:20.477244 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:21.478743 kubelet[1588]: E0513 08:31:21.478577 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:22.479390 kubelet[1588]: E0513 08:31:22.479267 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:23.480490 kubelet[1588]: E0513 08:31:23.480423 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:24.482132 kubelet[1588]: E0513 08:31:24.481988 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:25.483105 kubelet[1588]: E0513 08:31:25.482981 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:26.484358 kubelet[1588]: E0513 08:31:26.484231 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:27.485642 kubelet[1588]: E0513 08:31:27.485450 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:28.486148 kubelet[1588]: E0513 08:31:28.486022 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:29.486785 kubelet[1588]: E0513 08:31:29.486648 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:30.333563 kubelet[1588]: E0513 08:31:30.333418 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:30.487331 kubelet[1588]: E0513 08:31:30.487258 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:31.488808 kubelet[1588]: E0513 08:31:31.488732 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:32.490433 kubelet[1588]: E0513 08:31:32.490288 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:33.492153 kubelet[1588]: E0513 08:31:33.492092 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:34.493270 kubelet[1588]: E0513 08:31:34.493186 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:35.495662 kubelet[1588]: E0513 08:31:35.495474 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:36.496110 kubelet[1588]: E0513 08:31:36.496013 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:37.497406 kubelet[1588]: E0513 08:31:37.497328 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:38.498897 
kubelet[1588]: E0513 08:31:38.498766 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:39.499072 kubelet[1588]: E0513 08:31:39.498993 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:40.500020 kubelet[1588]: E0513 08:31:40.499937 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:41.501541 kubelet[1588]: E0513 08:31:41.501461 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:42.502449 kubelet[1588]: E0513 08:31:42.502315 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:43.502776 kubelet[1588]: E0513 08:31:43.502695 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:44.504138 kubelet[1588]: E0513 08:31:44.504055 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:45.505749 kubelet[1588]: E0513 08:31:45.505584 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:46.508026 kubelet[1588]: E0513 08:31:46.507900 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:47.509232 kubelet[1588]: E0513 08:31:47.509155 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:48.510904 kubelet[1588]: E0513 08:31:48.510797 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:49.512013 kubelet[1588]: E0513 08:31:49.511938 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:50.333569 kubelet[1588]: E0513 08:31:50.333458 1588 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:50.513163 kubelet[1588]: E0513 08:31:50.513051 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:51.514642 kubelet[1588]: E0513 08:31:51.514451 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:52.515117 kubelet[1588]: E0513 08:31:52.514972 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:53.515372 kubelet[1588]: E0513 08:31:53.515255 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:54.515571 kubelet[1588]: E0513 08:31:54.515449 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:55.516184 kubelet[1588]: E0513 08:31:55.516091 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:56.517248 kubelet[1588]: E0513 08:31:56.517098 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 13 08:31:57.518077 kubelet[1588]: E0513 08:31:57.518004 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:58.519262 kubelet[1588]: E0513 08:31:58.519160 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 08:31:59.520987 kubelet[1588]: E0513 08:31:59.520468 1588 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"