May 8 06:46:05.968256 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 7 23:10:51 -00 2025 May 8 06:46:05.968308 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 06:46:05.968332 kernel: BIOS-provided physical RAM map: May 8 06:46:05.968356 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 8 06:46:05.968372 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 8 06:46:05.968389 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 8 06:46:05.968409 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 8 06:46:05.968426 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 8 06:46:05.968442 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 8 06:46:05.968458 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 8 06:46:05.968475 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 8 06:46:05.968491 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 8 06:46:05.968512 kernel: NX (Execute Disable) protection: active May 8 06:46:05.968528 kernel: SMBIOS 3.0.0 present. May 8 06:46:05.968549 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 8 06:46:05.968567 kernel: Hypervisor detected: KVM May 8 06:46:05.968584 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 06:46:05.968602 kernel: kvm-clock: cpu 0, msr b1198001, primary cpu clock May 8 06:46:05.968623 kernel: kvm-clock: using sched offset of 4072347274 cycles May 8 06:46:05.968643 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 06:46:05.968662 kernel: tsc: Detected 1996.249 MHz processor May 8 06:46:05.968681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 06:46:05.968701 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 06:46:05.968719 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 8 06:46:05.968737 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 06:46:05.968755 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 8 06:46:05.968774 kernel: ACPI: Early table checksum verification disabled May 8 06:46:05.968795 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 8 06:46:05.968813 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 06:46:05.968832 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 06:46:05.968851 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 06:46:05.968868 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 8 06:46:05.968887 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 06:46:05.968905 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 06:46:05.968923 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 8 06:46:05.968944 kernel: 
ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 8 06:46:05.968962 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 8 06:46:05.968980 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 8 06:46:05.968998 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 8 06:46:05.969016 kernel: No NUMA configuration found May 8 06:46:05.969041 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 8 06:46:05.969060 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 8 06:46:05.969082 kernel: Zone ranges: May 8 06:46:05.969135 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 06:46:05.969154 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 8 06:46:05.969173 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 8 06:46:05.969192 kernel: Movable zone start for each node May 8 06:46:05.969210 kernel: Early memory node ranges May 8 06:46:05.969229 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 8 06:46:05.969248 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 8 06:46:05.969272 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 8 06:46:05.969291 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 8 06:46:05.969309 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 06:46:05.969328 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 8 06:46:05.969347 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 8 06:46:05.969366 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 06:46:05.969385 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 06:46:05.969404 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 06:46:05.969423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 06:46:05.969445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 06:46:05.969464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 06:46:05.969483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 06:46:05.969502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 06:46:05.969520 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 06:46:05.969539 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 8 06:46:05.969558 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 8 06:46:05.969577 kernel: Booting paravirtualized kernel on KVM May 8 06:46:05.969596 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 06:46:05.969619 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 8 06:46:05.969638 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 8 06:46:05.969657 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 8 06:46:05.969675 kernel: pcpu-alloc: [0] 0 1 May 8 06:46:05.969694 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0 May 8 06:46:05.969712 kernel: kvm-guest: PV spinlocks disabled, no host support May 8 06:46:05.969731 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 May 8 06:46:05.969749 kernel: Policy zone: Normal May 8 06:46:05.969771 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 06:46:05.969797 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 06:46:05.969816 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 06:46:05.969836 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 06:46:05.969855 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 06:46:05.969875 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2279K rwdata, 13724K rodata, 47464K init, 4116K bss, 225236K reserved, 0K cma-reserved) May 8 06:46:05.969893 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 8 06:46:05.969912 kernel: ftrace: allocating 34584 entries in 136 pages May 8 06:46:05.969931 kernel: ftrace: allocated 136 pages with 2 groups May 8 06:46:05.969954 kernel: rcu: Hierarchical RCU implementation. May 8 06:46:05.969974 kernel: rcu: RCU event tracing is enabled. May 8 06:46:05.969993 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 8 06:46:05.970012 kernel: Rude variant of Tasks RCU enabled. May 8 06:46:05.970031 kernel: Tracing variant of Tasks RCU enabled. May 8 06:46:05.970075 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 06:46:05.972140 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 8 06:46:05.972164 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 8 06:46:05.972179 kernel: Console: colour VGA+ 80x25 May 8 06:46:05.972201 kernel: printk: console [tty0] enabled May 8 06:46:05.972215 kernel: printk: console [ttyS0] enabled May 8 06:46:05.972229 kernel: ACPI: Core revision 20210730 May 8 06:46:05.972244 kernel: APIC: Switch to symmetric I/O mode setup May 8 06:46:05.972258 kernel: x2apic enabled May 8 06:46:05.972272 kernel: Switched APIC routing to physical x2apic. May 8 06:46:05.972286 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 06:46:05.972301 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 06:46:05.972315 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) May 8 06:46:05.972333 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 8 06:46:05.972347 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 8 06:46:05.972361 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 06:46:05.972376 kernel: Spectre V2 : Mitigation: Retpolines May 8 06:46:05.972390 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 06:46:05.972404 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 06:46:05.972418 kernel: Speculative Store Bypass: Vulnerable May 8 06:46:05.972432 kernel: x86/fpu: x87 FPU will use FXSAVE May 8 06:46:05.972446 kernel: Freeing SMP alternatives memory: 32K May 8 06:46:05.972463 kernel: pid_max: default: 32768 minimum: 301 May 8 06:46:05.972477 kernel: LSM: Security Framework initializing May 8 06:46:05.972491 kernel: SELinux: Initializing. May 8 06:46:05.972505 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 06:46:05.972519 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 06:46:05.972534 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 8 06:46:05.972559 kernel: Performance Events: AMD PMU driver. May 8 06:46:05.972578 kernel: ... version: 0 May 8 06:46:05.972592 kernel: ... bit width: 48 May 8 06:46:05.972607 kernel: ... generic registers: 4 May 8 06:46:05.972621 kernel: ... value mask: 0000ffffffffffff May 8 06:46:05.972636 kernel: ... max period: 00007fffffffffff May 8 06:46:05.972654 kernel: ... fixed-purpose events: 0 May 8 06:46:05.972669 kernel: ... event mask: 000000000000000f May 8 06:46:05.972683 kernel: signal: max sigframe size: 1440 May 8 06:46:05.972698 kernel: rcu: Hierarchical SRCU implementation. May 8 06:46:05.972713 kernel: smp: Bringing up secondary CPUs ... May 8 06:46:05.972730 kernel: x86: Booting SMP configuration: May 8 06:46:05.972744 kernel: .... 
node #0, CPUs: #1 May 8 06:46:05.972759 kernel: kvm-clock: cpu 1, msr b1198041, secondary cpu clock May 8 06:46:05.972773 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0 May 8 06:46:05.972788 kernel: smp: Brought up 1 node, 2 CPUs May 8 06:46:05.972802 kernel: smpboot: Max logical packages: 2 May 8 06:46:05.972818 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 8 06:46:05.972832 kernel: devtmpfs: initialized May 8 06:46:05.972847 kernel: x86/mm: Memory block size: 128MB May 8 06:46:05.972864 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 06:46:05.972879 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 8 06:46:05.972894 kernel: pinctrl core: initialized pinctrl subsystem May 8 06:46:05.972909 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 06:46:05.972923 kernel: audit: initializing netlink subsys (disabled) May 8 06:46:05.972938 kernel: audit: type=2000 audit(1746686764.408:1): state=initialized audit_enabled=0 res=1 May 8 06:46:05.972953 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 06:46:05.972968 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 06:46:05.972982 kernel: cpuidle: using governor menu May 8 06:46:05.973000 kernel: ACPI: bus type PCI registered May 8 06:46:05.973015 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 06:46:05.973030 kernel: dca service started, version 1.12.1 May 8 06:46:05.973045 kernel: PCI: Using configuration type 1 for base access May 8 06:46:05.973060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 8 06:46:05.973074 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 8 06:46:05.973110 kernel: ACPI: Added _OSI(Module Device) May 8 06:46:05.973126 kernel: ACPI: Added _OSI(Processor Device) May 8 06:46:05.973141 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 06:46:05.973162 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 06:46:05.973176 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 8 06:46:05.973190 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 8 06:46:05.973205 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 8 06:46:05.973220 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 06:46:05.973234 kernel: ACPI: Interpreter enabled May 8 06:46:05.973249 kernel: ACPI: PM: (supports S0 S3 S5) May 8 06:46:05.973263 kernel: ACPI: Using IOAPIC for interrupt routing May 8 06:46:05.973278 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 06:46:05.973296 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 8 06:46:05.973310 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 06:46:05.973532 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 8 06:46:05.973690 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 8 06:46:05.973713 kernel: acpiphp: Slot [3] registered May 8 06:46:05.973728 kernel: acpiphp: Slot [4] registered May 8 06:46:05.973743 kernel: acpiphp: Slot [5] registered May 8 06:46:05.973758 kernel: acpiphp: Slot [6] registered May 8 06:46:05.973778 kernel: acpiphp: Slot [7] registered May 8 06:46:05.973793 kernel: acpiphp: Slot [8] registered May 8 06:46:05.973808 kernel: acpiphp: Slot [9] registered May 8 06:46:05.973822 kernel: acpiphp: Slot [10] registered May 8 06:46:05.973837 kernel: acpiphp: Slot [11] registered May 8 06:46:05.973851 kernel: acpiphp: Slot [12] registered May 8 06:46:05.973866 kernel: acpiphp: Slot [13] registered May 8 06:46:05.973880 kernel: acpiphp: Slot [14] registered May 8 06:46:05.973895 kernel: acpiphp: Slot [15] registered May 8 06:46:05.973912 kernel: acpiphp: Slot [16] registered May 8 06:46:05.973927 kernel: acpiphp: Slot [17] registered May 8 06:46:05.973941 kernel: acpiphp: Slot [18] registered May 8 06:46:05.973955 kernel: acpiphp: Slot [19] registered May 8 06:46:05.973970 kernel: acpiphp: Slot [20] registered May 8 06:46:05.973984 kernel: acpiphp: Slot [21] registered May 8 06:46:05.973999 kernel: acpiphp: Slot [22] registered May 8 06:46:05.974013 kernel: acpiphp: Slot [23] registered May 8 06:46:05.974028 kernel: acpiphp: Slot [24] registered May 8 06:46:05.974062 kernel: acpiphp: Slot [25] registered May 8 06:46:05.974080 kernel: acpiphp: Slot [26] registered May 8 06:46:05.974118 kernel: acpiphp: Slot [27] registered May 8 06:46:05.974133 kernel: acpiphp: Slot [28] registered May 8 06:46:05.974147 kernel: acpiphp: Slot [29] registered May 8 06:46:05.974162 kernel: acpiphp: Slot [30] registered May 8 06:46:05.974176 kernel: acpiphp: Slot [31] registered May 8 06:46:05.974191 kernel: PCI host bridge to bus 0000:00 May 8 06:46:05.974350 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 06:46:05.974495 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 06:46:05.974631 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 06:46:05.974764 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 06:46:05.974895 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 8 06:46:05.975008 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 06:46:05.979137 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 8 06:46:05.979246 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 8 06:46:05.979351 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 8 06:46:05.979441 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 8 06:46:05.979529 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 8 06:46:05.979617 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 8 06:46:05.979705 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 8 06:46:05.979791 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 8 06:46:05.979891 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 8 06:46:05.979979 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 8 06:46:05.980060 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 8 06:46:05.980166 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 8 06:46:05.980252 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 8 06:46:05.980339 kernel: pci 0000:00:02.0: reg 0x18: 
[mem 0xc000000000-0xc000003fff 64bit pref] May 8 06:46:05.980423 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 8 06:46:05.980508 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 8 06:46:05.980590 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 06:46:05.980685 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 8 06:46:05.980767 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 8 06:46:05.980847 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 8 06:46:05.980928 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 8 06:46:05.981008 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 8 06:46:05.981116 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 8 06:46:05.981203 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 8 06:46:05.981286 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 8 06:46:05.981368 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 8 06:46:05.981455 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 8 06:46:05.981537 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 8 06:46:05.981617 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 8 06:46:05.981709 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 8 06:46:05.981792 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 8 06:46:05.981873 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 8 06:46:05.981955 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 8 06:46:05.981967 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 06:46:05.981975 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 06:46:05.981983 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 06:46:05.981995 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 06:46:05.982003 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 8 06:46:05.982011 kernel: iommu: Default domain type: Translated May 8 06:46:05.982019 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 06:46:05.982135 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 8 06:46:05.982225 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 06:46:05.982312 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 8 06:46:05.982326 kernel: vgaarb: loaded May 8 06:46:05.982334 kernel: pps_core: LinuxPPS API ver. 1 registered May 8 06:46:05.982346 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 8 06:46:05.982355 kernel: PTP clock support registered May 8 06:46:05.982364 kernel: PCI: Using ACPI for IRQ routing May 8 06:46:05.982372 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 06:46:05.982381 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 8 06:46:05.982389 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 8 06:46:05.982398 kernel: clocksource: Switched to clocksource kvm-clock May 8 06:46:05.982406 kernel: VFS: Disk quotas dquot_6.6.0 May 8 06:46:05.982415 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 06:46:05.982426 kernel: pnp: PnP ACPI init May 8 06:46:05.982513 kernel: pnp 00:03: [dma 2] May 8 06:46:05.982527 kernel: pnp: PnP ACPI: found 5 devices May 8 06:46:05.982536 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 06:46:05.982544 kernel: NET: Registered PF_INET protocol family May 8 06:46:05.982553 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 06:46:05.982562 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 06:46:05.982570 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 06:46:05.982582 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 06:46:05.982591 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 8 06:46:05.982599 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 06:46:05.982608 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 06:46:05.982616 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 06:46:05.982625 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 06:46:05.982634 kernel: NET: Registered PF_XDP protocol family May 8 06:46:05.982713 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 06:46:05.982799 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 06:46:05.982888 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 06:46:05.982968 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 8 06:46:05.983044 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 8 06:46:05.986228 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 8 06:46:05.986327 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 8 06:46:05.986417 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 8 06:46:05.986430 kernel: PCI: CLS 0 bytes, default 64 May 8 06:46:05.986440 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 8 06:46:05.986453 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 8 06:46:05.986462 kernel: Initialise system trusted keyrings May 8 06:46:05.986471 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 06:46:05.986480 kernel: Key type asymmetric registered May 8 06:46:05.986488 kernel: Asymmetric key parser 'x509' registered May 8 06:46:05.986497 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 8 06:46:05.986506 kernel: io scheduler mq-deadline registered May 8 06:46:05.986514 kernel: io scheduler kyber registered May 8 06:46:05.986523 kernel: io scheduler bfq registered May 8 06:46:05.986533 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 06:46:05.986542 kernel: ACPI: 
\_SB_.LNKB: Enabled at IRQ 10 May 8 06:46:05.986551 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 8 06:46:05.986560 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 8 06:46:05.986569 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 8 06:46:05.986578 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 06:46:05.986587 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 06:46:05.986596 kernel: random: crng init done May 8 06:46:05.986604 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 06:46:05.986615 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 06:46:05.986623 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 06:46:05.986632 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 06:46:05.986720 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 06:46:05.986801 kernel: rtc_cmos 00:04: registered as rtc0 May 8 06:46:05.986880 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T06:46:05 UTC (1746686765) May 8 06:46:05.986957 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 8 06:46:05.986970 kernel: NET: Registered PF_INET6 protocol family May 8 06:46:05.986982 kernel: Segment Routing with IPv6 May 8 06:46:05.986991 kernel: In-situ OAM (IOAM) with IPv6 May 8 06:46:05.986999 kernel: NET: Registered PF_PACKET protocol family May 8 06:46:05.987008 kernel: Key type dns_resolver registered May 8 06:46:05.987016 kernel: IPI shorthand broadcast: enabled May 8 06:46:05.987025 kernel: sched_clock: Marking stable (840530115, 166705864)->(1071542939, -64306960) May 8 06:46:05.987033 kernel: registered taskstats version 1 May 8 06:46:05.987042 kernel: Loading compiled-in X.509 certificates May 8 06:46:05.987051 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: c9ff13353458e6fa2786638fdd3dcad841d1075c' May 8 06:46:05.987061 kernel: Key type .fscrypt registered May 8 06:46:05.987069 kernel: Key type fscrypt-provisioning registered May 8 06:46:05.987078 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 06:46:05.987086 kernel: ima: Allocated hash algorithm: sha1 May 8 06:46:05.991123 kernel: ima: No architecture policies found May 8 06:46:05.991132 kernel: clk: Disabling unused clocks May 8 06:46:05.991141 kernel: Freeing unused kernel image (initmem) memory: 47464K May 8 06:46:05.991149 kernel: Write protecting the kernel read-only data: 28672k May 8 06:46:05.991161 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 8 06:46:05.991169 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 8 06:46:05.991177 kernel: Run /init as init process May 8 06:46:05.991185 kernel: with arguments: May 8 06:46:05.991193 kernel: /init May 8 06:46:05.991201 kernel: with environment: May 8 06:46:05.991208 kernel: HOME=/ May 8 06:46:05.991216 kernel: TERM=linux May 8 06:46:05.991224 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 06:46:05.991236 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 06:46:05.991248 systemd[1]: Detected virtualization kvm. May 8 06:46:05.991257 systemd[1]: Detected architecture x86-64. May 8 06:46:05.991266 systemd[1]: Running in initrd. 
May 8 06:46:05.991275 systemd[1]: No hostname configured, using default hostname. May 8 06:46:05.991283 systemd[1]: Hostname set to . May 8 06:46:05.991292 systemd[1]: Initializing machine ID from VM UUID. May 8 06:46:05.991302 systemd[1]: Queued start job for default target initrd.target. May 8 06:46:05.991310 systemd[1]: Started systemd-ask-password-console.path. May 8 06:46:05.991319 systemd[1]: Reached target cryptsetup.target. May 8 06:46:05.991327 systemd[1]: Reached target paths.target. May 8 06:46:05.991336 systemd[1]: Reached target slices.target. May 8 06:46:05.991344 systemd[1]: Reached target swap.target. May 8 06:46:05.991352 systemd[1]: Reached target timers.target. May 8 06:46:05.991361 systemd[1]: Listening on iscsid.socket. May 8 06:46:05.991372 systemd[1]: Listening on iscsiuio.socket. May 8 06:46:05.991388 systemd[1]: Listening on systemd-journald-audit.socket. May 8 06:46:05.991398 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 06:46:05.991407 systemd[1]: Listening on systemd-journald.socket. May 8 06:46:05.991416 systemd[1]: Listening on systemd-networkd.socket. May 8 06:46:05.991424 systemd[1]: Listening on systemd-udevd-control.socket. May 8 06:46:05.991435 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 06:46:05.991444 systemd[1]: Reached target sockets.target. May 8 06:46:05.991452 systemd[1]: Starting kmod-static-nodes.service... May 8 06:46:05.991461 systemd[1]: Finished network-cleanup.service. May 8 06:46:05.991470 systemd[1]: Starting systemd-fsck-usr.service... May 8 06:46:05.991479 systemd[1]: Starting systemd-journald.service... May 8 06:46:05.991488 systemd[1]: Starting systemd-modules-load.service... May 8 06:46:05.991497 systemd[1]: Starting systemd-resolved.service... May 8 06:46:05.991505 systemd[1]: Starting systemd-vconsole-setup.service... May 8 06:46:05.991515 systemd[1]: Finished kmod-static-nodes.service. May 8 06:46:05.991528 systemd-journald[185]: Journal started May 8 06:46:05.991573 systemd-journald[185]: Runtime Journal (/run/log/journal/3248c86ae54b404fb80f4dc97f80dac5) is 8.0M, max 78.4M, 70.4M free. May 8 06:46:05.948603 systemd-modules-load[186]: Inserted module 'overlay' May 8 06:46:06.018423 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 06:46:06.018447 systemd[1]: Started systemd-journald.service. May 8 06:46:06.018463 kernel: audit: type=1130 audit(1746686766.010:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.018476 kernel: Bridge firewalling registered May 8 06:46:06.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:05.995577 systemd-resolved[187]: Positive Trust Anchors: May 8 06:46:06.023747 kernel: audit: type=1130 audit(1746686766.018:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:05.995586 systemd-resolved[187]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 06:46:06.029503 kernel: audit: type=1130 audit(1746686766.024:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:05.995624 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 06:46:06.036726 kernel: audit: type=1130 audit(1746686766.029:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:05.998583 systemd-resolved[187]: Defaulting to hostname 'linux'. May 8 06:46:06.042165 kernel: audit: type=1130 audit(1746686766.037:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.017085 systemd-modules-load[186]: Inserted module 'br_netfilter' May 8 06:46:06.018981 systemd[1]: Started systemd-resolved.service. May 8 06:46:06.024396 systemd[1]: Finished systemd-fsck-usr.service. May 8 06:46:06.030205 systemd[1]: Finished systemd-vconsole-setup.service. May 8 06:46:06.037359 systemd[1]: Reached target nss-lookup.target. May 8 06:46:06.043324 systemd[1]: Starting dracut-cmdline-ask.service... May 8 06:46:06.044402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 06:46:06.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.050614 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 06:46:06.059891 kernel: audit: type=1130 audit(1746686766.051:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.064588 systemd[1]: Finished dracut-cmdline-ask.service. May 8 06:46:06.065565 kernel: SCSI subsystem initialized May 8 06:46:06.065821 systemd[1]: Starting dracut-cmdline.service... May 8 06:46:06.071353 kernel: audit: type=1130 audit(1746686766.065:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:06.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.074929 dracut-cmdline[202]: dracut-dracut-053 May 8 06:46:06.076537 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 06:46:06.088954 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 06:46:06.088985 kernel: device-mapper: uevent: version 1.0.3 May 8 06:46:06.091542 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 8 06:46:06.094838 systemd-modules-load[186]: Inserted module 'dm_multipath' May 8 06:46:06.095505 systemd[1]: Finished systemd-modules-load.service. May 8 06:46:06.097068 systemd[1]: Starting systemd-sysctl.service... May 8 06:46:06.102485 kernel: audit: type=1130 audit(1746686766.096:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.108374 systemd[1]: Finished systemd-sysctl.service. May 8 06:46:06.113744 kernel: audit: type=1130 audit(1746686766.108:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.139111 kernel: Loading iSCSI transport class v2.0-870. May 8 06:46:06.159110 kernel: iscsi: registered transport (tcp) May 8 06:46:06.185906 kernel: iscsi: registered transport (qla4xxx) May 8 06:46:06.185960 kernel: QLogic iSCSI HBA Driver May 8 06:46:06.233503 systemd[1]: Finished dracut-cmdline.service. May 8 06:46:06.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.236585 systemd[1]: Starting dracut-pre-udev.service... May 8 06:46:06.304205 kernel: raid6: sse2x4 gen() 9127 MB/s May 8 06:46:06.322193 kernel: raid6: sse2x4 xor() 7261 MB/s May 8 06:46:06.340196 kernel: raid6: sse2x2 gen() 14689 MB/s May 8 06:46:06.358349 kernel: raid6: sse2x2 xor() 8715 MB/s May 8 06:46:06.379195 kernel: raid6: sse2x1 gen() 11258 MB/s May 8 06:46:06.397592 kernel: raid6: sse2x1 xor() 6949 MB/s May 8 06:46:06.397652 kernel: raid6: using algorithm sse2x2 gen() 14689 MB/s May 8 06:46:06.397680 kernel: raid6: .... 
xor() 8715 MB/s, rmw enabled May 8 06:46:06.398823 kernel: raid6: using ssse3x2 recovery algorithm May 8 06:46:06.414837 kernel: xor: measuring software checksum speed May 8 06:46:06.414899 kernel: prefetch64-sse : 18355 MB/sec May 8 06:46:06.416079 kernel: generic_sse : 16746 MB/sec May 8 06:46:06.416174 kernel: xor: using function: prefetch64-sse (18355 MB/sec) May 8 06:46:06.531162 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 8 06:46:06.545870 systemd[1]: Finished dracut-pre-udev.service. May 8 06:46:06.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.547000 audit: BPF prog-id=7 op=LOAD May 8 06:46:06.548000 audit: BPF prog-id=8 op=LOAD May 8 06:46:06.549010 systemd[1]: Starting systemd-udevd.service... May 8 06:46:06.561854 systemd-udevd[384]: Using default interface naming scheme 'v252'. May 8 06:46:06.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.566534 systemd[1]: Started systemd-udevd.service. May 8 06:46:06.574355 systemd[1]: Starting dracut-pre-trigger.service... May 8 06:46:06.595165 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation May 8 06:46:06.645128 systemd[1]: Finished dracut-pre-trigger.service. May 8 06:46:06.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.648435 systemd[1]: Starting systemd-udev-trigger.service... May 8 06:46:06.685798 systemd[1]: Finished systemd-udev-trigger.service. May 8 06:46:06.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:06.762145 kernel: libata version 3.00 loaded. May 8 06:46:06.765210 kernel: ata_piix 0000:00:01.1: version 2.13 May 8 06:46:06.787350 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 8 06:46:06.791840 kernel: scsi host0: ata_piix May 8 06:46:06.791962 kernel: scsi host1: ata_piix May 8 06:46:06.792079 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 8 06:46:06.792109 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 8 06:46:06.792122 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 06:46:06.792133 kernel: GPT:17805311 != 20971519 May 8 06:46:06.792143 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 06:46:06.792154 kernel: GPT:17805311 != 20971519 May 8 06:46:06.792166 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 06:46:06.792177 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 06:46:06.988153 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (438) May 8 06:46:07.008782 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 8 06:46:07.010262 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 8 06:46:07.029000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 06:46:07.034069 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
May 8 06:46:07.038360 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 8 06:46:07.039782 systemd[1]: Starting disk-uuid.service... May 8 06:46:07.060283 disk-uuid[471]: Primary Header is updated. May 8 06:46:07.060283 disk-uuid[471]: Secondary Entries is updated. May 8 06:46:07.060283 disk-uuid[471]: Secondary Header is updated. May 8 06:46:07.068169 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 06:46:07.075139 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 06:46:08.093047 disk-uuid[472]: The operation has completed successfully. May 8 06:46:08.094614 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 06:46:08.159609 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 06:46:08.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.159812 systemd[1]: Finished disk-uuid.service. May 8 06:46:08.180186 systemd[1]: Starting verity-setup.service... May 8 06:46:08.205178 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 8 06:46:08.279994 systemd[1]: Found device dev-mapper-usr.device. May 8 06:46:08.281301 systemd[1]: Mounting sysusr-usr.mount... May 8 06:46:08.282356 systemd[1]: Finished verity-setup.service. May 8 06:46:08.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.383165 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 8 06:46:08.383834 systemd[1]: Mounted sysusr-usr.mount. May 8 06:46:08.384490 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 8 06:46:08.385198 systemd[1]: Starting ignition-setup.service... May 8 06:46:08.389297 systemd[1]: Starting parse-ip-for-networkd.service... May 8 06:46:08.418293 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 06:46:08.418361 kernel: BTRFS info (device vda6): using free space tree May 8 06:46:08.418375 kernel: BTRFS info (device vda6): has skinny extents May 8 06:46:08.441867 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 06:46:08.462735 systemd[1]: Finished ignition-setup.service. May 8 06:46:08.464335 systemd[1]: Starting ignition-fetch-offline.service... May 8 06:46:08.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.492590 systemd[1]: Finished parse-ip-for-networkd.service. May 8 06:46:08.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.494000 audit: BPF prog-id=9 op=LOAD May 8 06:46:08.495130 systemd[1]: Starting systemd-networkd.service... 
May 8 06:46:08.526518 systemd-networkd[642]: lo: Link UP May 8 06:46:08.526530 systemd-networkd[642]: lo: Gained carrier May 8 06:46:08.527360 systemd-networkd[642]: Enumeration completed May 8 06:46:08.527904 systemd-networkd[642]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 06:46:08.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.530324 systemd[1]: Started systemd-networkd.service. May 8 06:46:08.531014 systemd-networkd[642]: eth0: Link UP May 8 06:46:08.531023 systemd-networkd[642]: eth0: Gained carrier May 8 06:46:08.532062 systemd[1]: Reached target network.target. May 8 06:46:08.534635 systemd[1]: Starting iscsiuio.service... May 8 06:46:08.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.541998 systemd[1]: Started iscsiuio.service. May 8 06:46:08.543649 systemd[1]: Starting iscsid.service... May 8 06:46:08.547567 iscsid[647]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 8 06:46:08.547567 iscsid[647]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 8 06:46:08.547567 iscsid[647]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 8 06:46:08.547567 iscsid[647]: If using hardware iscsi like qla4xxx this message can be ignored. May 8 06:46:08.547567 iscsid[647]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 8 06:46:08.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.554305 iscsid[647]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 8 06:46:08.549418 systemd[1]: Started iscsid.service. May 8 06:46:08.554691 systemd-networkd[642]: eth0: DHCPv4 address 172.24.4.62/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 8 06:46:08.556334 systemd[1]: Starting dracut-initqueue.service... May 8 06:46:08.570741 systemd[1]: Finished dracut-initqueue.service. May 8 06:46:08.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.571442 systemd[1]: Reached target remote-fs-pre.target. May 8 06:46:08.572766 systemd[1]: Reached target remote-cryptsetup.target. May 8 06:46:08.574358 systemd[1]: Reached target remote-fs.target. May 8 06:46:08.576929 systemd[1]: Starting dracut-pre-mount.service... May 8 06:46:08.588964 systemd[1]: Finished dracut-pre-mount.service. May 8 06:46:08.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 8 06:46:08.778231 ignition[611]: Ignition 2.14.0 May 8 06:46:08.780165 ignition[611]: Stage: fetch-offline May 8 06:46:08.781539 ignition[611]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 8 06:46:08.782298 ignition[611]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 8 06:46:08.784702 ignition[611]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 8 06:46:08.784994 ignition[611]: parsed url from cmdline: "" May 8 06:46:08.785005 ignition[611]: no config URL provided May 8 06:46:08.787336 systemd[1]: Finished ignition-fetch-offline.service. May 8 06:46:08.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:08.785018 ignition[611]: reading system config file "/usr/lib/ignition/user.ign" May 8 06:46:08.785039 ignition[611]: no config at "/usr/lib/ignition/user.ign" May 8 06:46:08.791055 systemd[1]: Starting ignition-fetch.service... May 8 06:46:08.785062 ignition[611]: failed to fetch config: resource requires networking May 8 06:46:08.785575 ignition[611]: Ignition finished successfully May 8 06:46:08.810133 ignition[665]: Ignition 2.14.0 May 8 06:46:08.810162 ignition[665]: Stage: fetch May 8 06:46:08.810397 ignition[665]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 8 06:46:08.810442 ignition[665]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 8 06:46:08.812665 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 8 06:46:08.812898 ignition[665]: parsed url from cmdline: "" May 8 06:46:08.812908 ignition[665]: no config URL provided May 8 06:46:08.812920 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" May 8 06:46:08.812939 ignition[665]: no config at "/usr/lib/ignition/user.ign" May 8 06:46:08.820848 ignition[665]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 8 06:46:08.820891 ignition[665]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 8 06:46:08.822679 ignition[665]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 8 06:46:09.074014 ignition[665]: GET result: OK May 8 06:46:09.074200 ignition[665]: parsing config with SHA512: ff82a4abb0278e15313cb4fbc156797c105ce161ad9ce89730178a83e5765ffcc4d4893092a7593bcfd9924ecac833d3aa9b0e9e385ac8ac1fe537a69afe6254 May 8 06:46:09.089584 unknown[665]: fetched base config from "system" May 8 06:46:09.089610 unknown[665]: fetched base config from "system" May 8 06:46:09.090442 ignition[665]: fetch: fetch complete May 8 06:46:09.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.089623 unknown[665]: fetched user config from "openstack" May 8 06:46:09.090454 ignition[665]: fetch: fetch passed May 8 06:46:09.092843 systemd[1]: Finished ignition-fetch.service. May 8 06:46:09.090531 ignition[665]: Ignition finished successfully May 8 06:46:09.103933 systemd[1]: Starting ignition-kargs.service... 
May 8 06:46:09.122204 ignition[671]: Ignition 2.14.0 May 8 06:46:09.122227 ignition[671]: Stage: kargs May 8 06:46:09.122463 ignition[671]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 8 06:46:09.122503 ignition[671]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 8 06:46:09.124659 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 8 06:46:09.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.129276 systemd[1]: Finished ignition-kargs.service. May 8 06:46:09.126726 ignition[671]: kargs: kargs passed May 8 06:46:09.126839 ignition[671]: Ignition finished successfully May 8 06:46:09.133436 systemd[1]: Starting ignition-disks.service... May 8 06:46:09.156125 ignition[677]: Ignition 2.14.0 May 8 06:46:09.156156 ignition[677]: Stage: disks May 8 06:46:09.156418 ignition[677]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 8 06:46:09.156463 ignition[677]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 8 06:46:09.158747 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 8 06:46:09.161183 ignition[677]: disks: disks passed May 8 06:46:09.163176 systemd[1]: Finished ignition-disks.service. May 8 06:46:09.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.161285 ignition[677]: Ignition finished successfully May 8 06:46:09.165652 systemd[1]: Reached target initrd-root-device.target. May 8 06:46:09.167955 systemd[1]: Reached target local-fs-pre.target. May 8 06:46:09.170393 systemd[1]: Reached target local-fs.target. May 8 06:46:09.172742 systemd[1]: Reached target sysinit.target. May 8 06:46:09.175161 systemd[1]: Reached target basic.target. May 8 06:46:09.179270 systemd[1]: Starting systemd-fsck-root.service... May 8 06:46:09.211181 systemd-fsck[684]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks May 8 06:46:09.230949 systemd[1]: Finished systemd-fsck-root.service. May 8 06:46:09.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.234144 systemd[1]: Mounting sysroot.mount... May 8 06:46:09.260417 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 8 06:46:09.259978 systemd[1]: Mounted sysroot.mount. May 8 06:46:09.262504 systemd[1]: Reached target initrd-root-fs.target. May 8 06:46:09.266575 systemd[1]: Mounting sysroot-usr.mount... May 8 06:46:09.270754 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 8 06:46:09.273060 systemd[1]: Starting flatcar-openstack-hostname.service... May 8 06:46:09.274447 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 06:46:09.274509 systemd[1]: Reached target ignition-diskful.target. May 8 06:46:09.277784 systemd[1]: Mounted sysroot-usr.mount. 
May 8 06:46:09.280772 systemd[1]: Starting initrd-setup-root.service... May 8 06:46:09.293821 initrd-setup-root[695]: cut: /sysroot/etc/passwd: No such file or directory May 8 06:46:09.306305 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 06:46:09.317820 initrd-setup-root[704]: cut: /sysroot/etc/group: No such file or directory May 8 06:46:09.330180 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (702) May 8 06:46:09.335211 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 06:46:09.335285 kernel: BTRFS info (device vda6): using free space tree May 8 06:46:09.335319 kernel: BTRFS info (device vda6): has skinny extents May 8 06:46:09.338509 initrd-setup-root[715]: cut: /sysroot/etc/shadow: No such file or directory May 8 06:46:09.347014 initrd-setup-root[736]: cut: /sysroot/etc/gshadow: No such file or directory May 8 06:46:09.355202 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 8 06:46:09.438994 systemd[1]: Finished initrd-setup-root.service. May 8 06:46:09.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.443005 systemd[1]: Starting ignition-mount.service... May 8 06:46:09.451995 systemd[1]: Starting sysroot-boot.service... May 8 06:46:09.463740 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 8 06:46:09.463971 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 8 06:46:09.506736 ignition[759]: INFO : Ignition 2.14.0 May 8 06:46:09.507650 ignition[759]: INFO : Stage: mount May 8 06:46:09.508338 ignition[759]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 8 06:46:09.509126 ignition[759]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 8 06:46:09.511305 ignition[759]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 8 06:46:09.512994 ignition[759]: INFO : mount: mount passed May 8 06:46:09.513600 ignition[759]: INFO : Ignition finished successfully May 8 06:46:09.515217 systemd[1]: Finished ignition-mount.service. May 8 06:46:09.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.518084 systemd[1]: Finished sysroot-boot.service. May 8 06:46:09.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.524683 coreos-metadata[690]: May 08 06:46:09.524 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 8 06:46:09.539965 coreos-metadata[690]: May 08 06:46:09.539 INFO Fetch successful May 8 06:46:09.540731 coreos-metadata[690]: May 08 06:46:09.540 INFO wrote hostname ci-3510-3-7-n-500342624e.novalocal to /sysroot/etc/hostname May 8 06:46:09.543952 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 8 06:46:09.544050 systemd[1]: Finished flatcar-openstack-hostname.service. May 8 06:46:09.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 8 06:46:09.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:09.546223 systemd[1]: Starting ignition-files.service... May 8 06:46:09.553603 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 06:46:09.565161 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (767) May 8 06:46:09.570359 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 06:46:09.570385 kernel: BTRFS info (device vda6): using free space tree May 8 06:46:09.570397 kernel: BTRFS info (device vda6): has skinny extents May 8 06:46:09.581631 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 8 06:46:09.594659 ignition[786]: INFO : Ignition 2.14.0 May 8 06:46:09.594659 ignition[786]: INFO : Stage: files May 8 06:46:09.596717 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 8 06:46:09.596717 ignition[786]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 8 06:46:09.596717 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 8 06:46:09.601533 ignition[786]: DEBUG : files: compiled without relabeling support, skipping May 8 06:46:09.601533 ignition[786]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 06:46:09.601533 ignition[786]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 06:46:09.606512 ignition[786]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 06:46:09.606512 ignition[786]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 06:46:09.606512 ignition[786]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 06:46:09.606496 unknown[786]: wrote ssh authorized keys file for user: core May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 06:46:09.612148 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 8 06:46:10.078374 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK May 8 06:46:10.090313 systemd-networkd[642]: eth0: Gained IPv6LL May 8 06:46:11.751941 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 8 06:46:11.751941 ignition[786]: INFO : files: op(8): [started] processing unit "coreos-metadata-sshkeys@.service" May 8 06:46:11.751941 ignition[786]: INFO : files: op(8): [finished] processing unit "coreos-metadata-sshkeys@.service" May 8 06:46:11.751941 ignition[786]: INFO : files: op(9): [started] processing unit "containerd.service" May 8 06:46:11.768138 ignition[786]: INFO : files: op(9): op(a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 8 06:46:11.768138 ignition[786]: INFO : files: op(9): op(a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 8 06:46:11.768138 ignition[786]: INFO : files: op(9): [finished] processing unit "containerd.service" May 8 06:46:11.768138 ignition[786]: INFO : files: op(b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 8 06:46:11.768138 ignition[786]: INFO : files: op(b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 8 06:46:11.768138 ignition[786]: INFO : files: createResultFile: createFiles: op(c): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 06:46:11.768138 ignition[786]: INFO : files: createResultFile: createFiles: op(c): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 06:46:11.768138 ignition[786]: INFO : files: files passed May 8 06:46:11.768138 ignition[786]: INFO : Ignition finished successfully May 8 06:46:11.795893 kernel: kauditd_printk_skb: 28 callbacks suppressed May 8 06:46:11.795918 kernel: audit: type=1130 audit(1746686771.772:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.768589 systemd[1]: Finished ignition-files.service. May 8 06:46:11.773166 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 8 06:46:11.804737 kernel: audit: type=1130 audit(1746686771.798:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:11.788934 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 8 06:46:11.816159 kernel: audit: type=1130 audit(1746686771.805:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.816184 kernel: audit: type=1131 audit(1746686771.805:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.816344 initrd-setup-root-after-ignition[809]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 06:46:11.789739 systemd[1]: Starting ignition-quench.service... May 8 06:46:11.797669 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 8 06:46:11.799400 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 06:46:11.799562 systemd[1]: Finished ignition-quench.service. May 8 06:46:11.806278 systemd[1]: Reached target ignition-complete.target. May 8 06:46:11.818769 systemd[1]: Starting initrd-parse-etc.service... May 8 06:46:11.841280 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 06:46:11.842939 systemd[1]: Finished initrd-parse-etc.service. May 8 06:46:11.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.844864 systemd[1]: Reached target initrd-fs.target. May 8 06:46:11.861632 kernel: audit: type=1130 audit(1746686771.844:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.861677 kernel: audit: type=1131 audit(1746686771.844:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.860454 systemd[1]: Reached target initrd.target. May 8 06:46:11.862164 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 8 06:46:11.863002 systemd[1]: Starting dracut-pre-pivot.service... May 8 06:46:11.877630 systemd[1]: Finished dracut-pre-pivot.service. May 8 06:46:11.891265 kernel: audit: type=1130 audit(1746686771.878:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:11.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.880322 systemd[1]: Starting initrd-cleanup.service... May 8 06:46:11.911051 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 06:46:11.911243 systemd[1]: Finished initrd-cleanup.service. May 8 06:46:11.935085 kernel: audit: type=1130 audit(1746686771.913:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.935175 kernel: audit: type=1131 audit(1746686771.913:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.913840 systemd[1]: Stopped target nss-lookup.target. May 8 06:46:11.935546 systemd[1]: Stopped target remote-cryptsetup.target. May 8 06:46:11.937202 systemd[1]: Stopped target timers.target. May 8 06:46:11.938894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 06:46:11.946681 kernel: audit: type=1131 audit(1746686771.940:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.938944 systemd[1]: Stopped dracut-pre-pivot.service. May 8 06:46:11.940484 systemd[1]: Stopped target initrd.target. May 8 06:46:11.947127 systemd[1]: Stopped target basic.target. May 8 06:46:11.948030 systemd[1]: Stopped target ignition-complete.target. May 8 06:46:11.949049 systemd[1]: Stopped target ignition-diskful.target. May 8 06:46:11.950062 systemd[1]: Stopped target initrd-root-device.target. May 8 06:46:11.951124 systemd[1]: Stopped target remote-fs.target. May 8 06:46:11.952120 systemd[1]: Stopped target remote-fs-pre.target. May 8 06:46:11.953208 systemd[1]: Stopped target sysinit.target. May 8 06:46:11.954200 systemd[1]: Stopped target local-fs.target. May 8 06:46:11.955185 systemd[1]: Stopped target local-fs-pre.target. May 8 06:46:11.956128 systemd[1]: Stopped target swap.target. May 8 06:46:11.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.956998 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 06:46:11.957050 systemd[1]: Stopped dracut-pre-mount.service. May 8 06:46:11.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:11.957969 systemd[1]: Stopped target cryptsetup.target. May 8 06:46:11.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.958937 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 06:46:11.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.958982 systemd[1]: Stopped dracut-initqueue.service. May 8 06:46:11.960157 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 06:46:11.960207 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 8 06:46:11.961190 systemd[1]: ignition-files.service: Deactivated successfully. May 8 06:46:11.961238 systemd[1]: Stopped ignition-files.service. May 8 06:46:11.962898 systemd[1]: Stopping ignition-mount.service... May 8 06:46:11.968865 iscsid[647]: iscsid shutting down. May 8 06:46:11.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.974149 systemd[1]: Stopping iscsid.service... May 8 06:46:11.975240 systemd[1]: Stopping sysroot-boot.service... May 8 06:46:11.975712 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 06:46:11.975765 systemd[1]: Stopped systemd-udev-trigger.service. May 8 06:46:11.976370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 06:46:11.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.976410 systemd[1]: Stopped dracut-pre-trigger.service. May 8 06:46:11.977212 systemd[1]: iscsid.service: Deactivated successfully. May 8 06:46:11.977310 systemd[1]: Stopped iscsid.service. May 8 06:46:11.978541 systemd[1]: Stopping iscsiuio.service... May 8 06:46:11.981537 systemd[1]: iscsiuio.service: Deactivated successfully. May 8 06:46:11.981624 systemd[1]: Stopped iscsiuio.service. May 8 06:46:11.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:11.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.999349 ignition[824]: INFO : Ignition 2.14.0 May 8 06:46:11.999349 ignition[824]: INFO : Stage: umount May 8 06:46:11.999349 ignition[824]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 8 06:46:11.999349 ignition[824]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 8 06:46:11.999349 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 8 06:46:11.999349 ignition[824]: INFO : umount: umount passed May 8 06:46:11.999349 ignition[824]: INFO : Ignition finished successfully May 8 06:46:11.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.995680 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 06:46:11.995779 systemd[1]: Stopped ignition-mount.service. May 8 06:46:11.996437 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 06:46:11.996478 systemd[1]: Stopped ignition-disks.service. May 8 06:46:11.996948 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 06:46:11.996983 systemd[1]: Stopped ignition-kargs.service. May 8 06:46:12.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:11.997536 systemd[1]: ignition-fetch.service: Deactivated successfully. May 8 06:46:11.997573 systemd[1]: Stopped ignition-fetch.service. May 8 06:46:11.998084 systemd[1]: Stopped target network.target. May 8 06:46:11.998648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 06:46:11.998693 systemd[1]: Stopped ignition-fetch-offline.service. May 8 06:46:12.000106 systemd[1]: Stopped target paths.target. May 8 06:46:12.001158 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 06:46:12.008230 systemd[1]: Stopped systemd-ask-password-console.path. May 8 06:46:12.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.008773 systemd[1]: Stopped target slices.target. May 8 06:46:12.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.009197 systemd[1]: Stopped target sockets.target. May 8 06:46:12.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.009813 systemd[1]: iscsid.socket: Deactivated successfully. 
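The umount stage above follows the same pattern as kargs, disks, mount and files: a "Stage: <name>" line, then "<name>: <name> passed". A small Python sketch (illustrative only; the log file argument is an assumption) that summarizes stage progress from a saved copy of this console log:

    import re
    import sys

    # Patterns follow the "Stage: <name>" and "<name>: <name> passed" lines in this log.
    stage_re  = re.compile(r"ignition\[\d+\]:.*?Stage: (\w+)")
    passed_re = re.compile(r"ignition\[\d+\]:.*?(\w+): \1 passed")

    text = open(sys.argv[1], encoding="utf-8", errors="replace").read()
    print("stages seen:  ", stage_re.findall(text))
    print("stages passed:", passed_re.findall(text))

Run against the lines shown here it would report the five stages kargs, disks, mount, files and umount as passed.
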
May 8 06:46:12.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.009854 systemd[1]: Closed iscsid.socket. May 8 06:46:12.010337 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 06:46:12.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.010369 systemd[1]: Closed iscsiuio.socket. May 8 06:46:12.044000 audit: BPF prog-id=6 op=UNLOAD May 8 06:46:12.010814 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 06:46:12.010857 systemd[1]: Stopped ignition-setup.service. May 8 06:46:12.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.011871 systemd[1]: Stopping systemd-networkd.service... May 8 06:46:12.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.012456 systemd[1]: Stopping systemd-resolved.service... May 8 06:46:12.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.013914 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 06:46:12.017161 systemd-networkd[642]: eth0: DHCPv6 lease lost May 8 06:46:12.054000 audit: BPF prog-id=9 op=UNLOAD May 8 06:46:12.018369 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 06:46:12.018462 systemd[1]: Stopped systemd-networkd.service. May 8 06:46:12.020333 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 06:46:12.020368 systemd[1]: Closed systemd-networkd.socket. May 8 06:46:12.024032 systemd[1]: Stopping network-cleanup.service... May 8 06:46:12.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.027322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 06:46:12.027409 systemd[1]: Stopped parse-ip-for-networkd.service. May 8 06:46:12.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.028576 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 06:46:12.028635 systemd[1]: Stopped systemd-sysctl.service. May 8 06:46:12.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.029863 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
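Earlier, the files stage wrote its outcome to /sysroot/etc/.ignition-result.json; once the initrd teardown above completes and the system switches root (below), that file is visible at /etc/.ignition-result.json. A sketch that just loads and pretty-prints it, assuming only that it is plain JSON, which is all the log tells us:

    import json

    # Path from the files stage's createResultFile entry; /sysroot becomes / after switch-root.
    with open("/etc/.ignition-result.json", encoding="utf-8") as f:
        result = json.load(f)

    print(json.dumps(result, indent=2, sort_keys=True))
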
May 8 06:46:12.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.029918 systemd[1]: Stopped systemd-modules-load.service. May 8 06:46:12.030922 systemd[1]: Stopping systemd-udevd.service... May 8 06:46:12.037978 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 06:46:12.040389 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 06:46:12.040500 systemd[1]: Stopped systemd-resolved.service. May 8 06:46:12.041841 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 06:46:12.041965 systemd[1]: Stopped systemd-udevd.service. May 8 06:46:12.043866 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 06:46:12.043901 systemd[1]: Closed systemd-udevd-control.socket. May 8 06:46:12.045980 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 06:46:12.046013 systemd[1]: Closed systemd-udevd-kernel.socket. May 8 06:46:12.046602 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 06:46:12.046644 systemd[1]: Stopped dracut-pre-udev.service. May 8 06:46:12.047898 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 06:46:12.047934 systemd[1]: Stopped dracut-cmdline.service. May 8 06:46:12.048977 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 06:46:12.049014 systemd[1]: Stopped dracut-cmdline-ask.service. May 8 06:46:12.050778 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 8 06:46:12.051382 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 06:46:12.051432 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 8 06:46:12.058077 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 06:46:12.058152 systemd[1]: Stopped kmod-static-nodes.service. May 8 06:46:12.059791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 06:46:12.059830 systemd[1]: Stopped systemd-vconsole-setup.service. May 8 06:46:12.061192 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 06:46:12.061657 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 06:46:12.061743 systemd[1]: Stopped network-cleanup.service. May 8 06:46:12.062821 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 06:46:12.062896 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 8 06:46:12.317388 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 06:46:12.317594 systemd[1]: Stopped sysroot-boot.service. May 8 06:46:12.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:12.320480 systemd[1]: Reached target initrd-switch-root.target. May 8 06:46:12.322582 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 06:46:12.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:12.322699 systemd[1]: Stopped initrd-setup-root.service. May 8 06:46:12.326582 systemd[1]: Starting initrd-switch-root.service... May 8 06:46:12.351564 systemd[1]: Switching root. May 8 06:46:12.352000 audit: BPF prog-id=5 op=UNLOAD May 8 06:46:12.352000 audit: BPF prog-id=4 op=UNLOAD May 8 06:46:12.352000 audit: BPF prog-id=3 op=UNLOAD May 8 06:46:12.361000 audit: BPF prog-id=8 op=UNLOAD May 8 06:46:12.361000 audit: BPF prog-id=7 op=UNLOAD May 8 06:46:12.385338 systemd-journald[185]: Journal stopped May 8 06:46:16.732180 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). May 8 06:46:16.732236 kernel: SELinux: Class mctp_socket not defined in policy. May 8 06:46:16.732253 kernel: SELinux: Class anon_inode not defined in policy. May 8 06:46:16.732266 kernel: SELinux: the above unknown classes and permissions will be allowed May 8 06:46:16.732278 kernel: SELinux: policy capability network_peer_controls=1 May 8 06:46:16.732290 kernel: SELinux: policy capability open_perms=1 May 8 06:46:16.732302 kernel: SELinux: policy capability extended_socket_class=1 May 8 06:46:16.732316 kernel: SELinux: policy capability always_check_network=0 May 8 06:46:16.732328 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 06:46:16.732339 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 06:46:16.732350 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 06:46:16.732362 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 06:46:16.732374 systemd[1]: Successfully loaded SELinux policy in 96.527ms. May 8 06:46:16.732391 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.058ms. May 8 06:46:16.732406 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 06:46:16.732420 systemd[1]: Detected virtualization kvm. May 8 06:46:16.732433 systemd[1]: Detected architecture x86-64. May 8 06:46:16.732445 systemd[1]: Detected first boot. May 8 06:46:16.732458 systemd[1]: Hostname set to <ci-3510-3-7-n-500342624e.novalocal>. May 8 06:46:16.732470 systemd[1]: Initializing machine ID from VM UUID. May 8 06:46:16.732487 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 8 06:46:16.732499 systemd[1]: Populated /etc with preset unit settings. May 8 06:46:16.732512 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 06:46:16.732525 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 06:46:16.732539 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 06:46:16.732552 systemd[1]: Queued start job for default target multi-user.target. May 8 06:46:16.732566 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 8 06:46:16.732579 systemd[1]: Created slice system-addon\x2dconfig.slice. May 8 06:46:16.732592 systemd[1]: Created slice system-addon\x2drun.slice. 
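"Initializing machine ID from VM UUID." above refers to systemd seeding /etc/machine-id from the DMI product UUID exposed by the hypervisor on this first boot. A rough cross-check is sketched below; it is illustrative only (reading product_uuid normally requires root, and the exact derivation is systemd's, not this script's).

    # Compare /etc/machine-id (32 lowercase hex chars) with the hypervisor-provided
    # DMI product UUID (dashed form) that systemd used on first boot.
    with open("/etc/machine-id", encoding="ascii") as f:
        machine_id = f.read().strip()

    with open("/sys/class/dmi/id/product_uuid", encoding="ascii") as f:
        product_uuid = f.read().strip().lower().replace("-", "")

    print("machine-id:  ", machine_id)
    print("product UUID:", product_uuid)
    print("match" if machine_id == product_uuid else "differ")
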
May 8 06:46:16.732605 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 8 06:46:16.732617 systemd[1]: Created slice system-getty.slice. May 8 06:46:16.732629 systemd[1]: Created slice system-modprobe.slice. May 8 06:46:16.732645 systemd[1]: Created slice system-serial\x2dgetty.slice. May 8 06:46:16.732658 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 8 06:46:16.732670 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 8 06:46:16.732684 systemd[1]: Created slice user.slice. May 8 06:46:16.732696 systemd[1]: Started systemd-ask-password-console.path. May 8 06:46:16.732709 systemd[1]: Started systemd-ask-password-wall.path. May 8 06:46:16.732721 systemd[1]: Set up automount boot.automount. May 8 06:46:16.732733 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 8 06:46:16.732746 systemd[1]: Reached target integritysetup.target. May 8 06:46:16.732760 systemd[1]: Reached target remote-cryptsetup.target. May 8 06:46:16.732773 systemd[1]: Reached target remote-fs.target. May 8 06:46:16.732785 systemd[1]: Reached target slices.target. May 8 06:46:16.732798 systemd[1]: Reached target swap.target. May 8 06:46:16.732810 systemd[1]: Reached target torcx.target. May 8 06:46:16.732822 systemd[1]: Reached target veritysetup.target. May 8 06:46:16.732834 systemd[1]: Listening on systemd-coredump.socket. May 8 06:46:16.732847 systemd[1]: Listening on systemd-initctl.socket. May 8 06:46:16.732859 systemd[1]: Listening on systemd-journald-audit.socket. May 8 06:46:16.732871 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 06:46:16.732885 systemd[1]: Listening on systemd-journald.socket. May 8 06:46:16.732897 systemd[1]: Listening on systemd-networkd.socket. May 8 06:46:16.732909 systemd[1]: Listening on systemd-udevd-control.socket. May 8 06:46:16.732922 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 06:46:16.732934 systemd[1]: Listening on systemd-userdbd.socket. May 8 06:46:16.732946 systemd[1]: Mounting dev-hugepages.mount... May 8 06:46:16.732959 systemd[1]: Mounting dev-mqueue.mount... May 8 06:46:16.732971 systemd[1]: Mounting media.mount... May 8 06:46:16.732983 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 06:46:16.732997 systemd[1]: Mounting sys-kernel-debug.mount... May 8 06:46:16.733011 systemd[1]: Mounting sys-kernel-tracing.mount... May 8 06:46:16.733024 systemd[1]: Mounting tmp.mount... May 8 06:46:16.733036 systemd[1]: Starting flatcar-tmpfiles.service... May 8 06:46:16.733053 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 06:46:16.733066 systemd[1]: Starting kmod-static-nodes.service... May 8 06:46:16.733078 systemd[1]: Starting modprobe@configfs.service... May 8 06:46:16.733116 systemd[1]: Starting modprobe@dm_mod.service... May 8 06:46:16.733130 systemd[1]: Starting modprobe@drm.service... May 8 06:46:16.733145 systemd[1]: Starting modprobe@efi_pstore.service... May 8 06:46:16.733157 systemd[1]: Starting modprobe@fuse.service... May 8 06:46:16.733169 systemd[1]: Starting modprobe@loop.service... May 8 06:46:16.733181 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 06:46:16.733194 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
May 8 06:46:16.733210 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 8 06:46:16.733222 systemd[1]: Starting systemd-journald.service... May 8 06:46:16.733234 systemd[1]: Starting systemd-modules-load.service... May 8 06:46:16.733246 systemd[1]: Starting systemd-network-generator.service... May 8 06:46:16.733260 systemd[1]: Starting systemd-remount-fs.service... May 8 06:46:16.733273 systemd[1]: Starting systemd-udev-trigger.service... May 8 06:46:16.733286 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 06:46:16.733298 systemd[1]: Mounted dev-hugepages.mount. May 8 06:46:16.733311 systemd[1]: Mounted dev-mqueue.mount. May 8 06:46:16.733323 systemd[1]: Mounted media.mount. May 8 06:46:16.733336 systemd[1]: Mounted sys-kernel-debug.mount. May 8 06:46:16.733348 systemd[1]: Mounted sys-kernel-tracing.mount. May 8 06:46:16.733360 systemd[1]: Mounted tmp.mount. May 8 06:46:16.733375 systemd[1]: Finished kmod-static-nodes.service. May 8 06:46:16.733387 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 06:46:16.733399 systemd[1]: Finished modprobe@configfs.service. May 8 06:46:16.733411 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 06:46:16.733425 systemd-journald[962]: Journal started May 8 06:46:16.733469 systemd-journald[962]: Runtime Journal (/run/log/journal/3248c86ae54b404fb80f4dc97f80dac5) is 8.0M, max 78.4M, 70.4M free. May 8 06:46:16.573000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 06:46:16.573000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 8 06:46:16.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.730000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 06:46:16.730000 audit[962]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe999777a0 a2=4000 a3=7ffe9997783c items=0 ppid=1 pid=962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 06:46:16.730000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 8 06:46:16.740606 systemd[1]: Finished modprobe@dm_mod.service. May 8 06:46:16.740662 systemd[1]: Started systemd-journald.service. 
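The runtime journal limits reported above (8.0M current, max 78.4M, 70.4M free on /run/log/journal/3248c86ae54b404fb80f4dc97f80dac5) are derived by journald from the size of the backing tmpfs. The sketch below only prints the filesystem numbers journald is sizing against; the percentage-based defaults themselves are not shown in this log.

    import os

    # Journal directory as reported by systemd-journald[962] above.
    st = os.statvfs("/run/log/journal/3248c86ae54b404fb80f4dc97f80dac5")

    total_mib = st.f_blocks * st.f_frsize / 2**20
    free_mib  = st.f_bavail * st.f_frsize / 2**20
    print(f"backing filesystem: {total_mib:.1f} MiB total, {free_mib:.1f} MiB free")
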
May 8 06:46:16.740679 kernel: loop: module loaded May 8 06:46:16.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.740125 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 06:46:16.740289 systemd[1]: Finished modprobe@drm.service. May 8 06:46:16.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.741700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 06:46:16.741841 systemd[1]: Finished modprobe@efi_pstore.service. May 8 06:46:16.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.742685 systemd[1]: Finished systemd-modules-load.service. May 8 06:46:16.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.743683 systemd[1]: Finished systemd-network-generator.service. May 8 06:46:16.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.745422 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 06:46:16.745562 systemd[1]: Finished modprobe@loop.service. May 8 06:46:16.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.746342 systemd[1]: Finished systemd-remount-fs.service. 
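The modprobe@ instances finishing above (configfs, dm_mod, drm, efi_pstore, loop, and fuse just below) simply ask the kernel to load those modules; "loop: module loaded" and "fuse: init" are the kernel acknowledging it. A sketch that checks the same set against /proc/modules; modules built into the kernel will not appear there, hence the hedged wording in the output.

    # Modules requested by the modprobe@<name>.service instances in the log above.
    wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

    with open("/proc/modules", encoding="ascii") as f:
        loaded = {line.split()[0] for line in f}

    for mod in sorted(wanted):
        status = "loaded" if mod in loaded else "not listed (possibly built in)"
        print(f"{mod}: {status}")
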
May 8 06:46:16.747797 kernel: fuse: init (API version 7.34) May 8 06:46:16.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.748653 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 06:46:16.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.749307 systemd[1]: Finished modprobe@fuse.service. May 8 06:46:16.750058 systemd[1]: Reached target network-pre.target. May 8 06:46:16.752573 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 06:46:16.759899 systemd[1]: Mounting sys-kernel-config.mount... May 8 06:46:16.760432 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 06:46:16.763171 systemd[1]: Starting systemd-hwdb-update.service... May 8 06:46:16.766976 systemd[1]: Starting systemd-journal-flush.service... May 8 06:46:16.767549 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 06:46:16.771556 systemd[1]: Starting systemd-random-seed.service... May 8 06:46:16.772217 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 06:46:16.774040 systemd[1]: Starting systemd-sysctl.service... May 8 06:46:16.777890 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 8 06:46:16.778777 systemd[1]: Mounted sys-kernel-config.mount. May 8 06:46:16.785215 systemd-journald[962]: Time spent on flushing to /var/log/journal/3248c86ae54b404fb80f4dc97f80dac5 is 49.032ms for 1028 entries. May 8 06:46:16.785215 systemd-journald[962]: System Journal (/var/log/journal/3248c86ae54b404fb80f4dc97f80dac5) is 8.0M, max 584.8M, 576.8M free. May 8 06:46:16.864596 systemd-journald[962]: Received client request to flush runtime journal. May 8 06:46:16.864638 kernel: kauditd_printk_skb: 71 callbacks suppressed May 8 06:46:16.864660 kernel: audit: type=1130 audit(1746686776.802:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.864678 kernel: audit: type=1130 audit(1746686776.817:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.864695 kernel: audit: type=1130 audit(1746686776.845:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:16.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.801884 systemd[1]: Finished systemd-random-seed.service. May 8 06:46:16.802613 systemd[1]: Reached target first-boot-complete.target. May 8 06:46:16.816984 systemd[1]: Finished systemd-sysctl.service. May 8 06:46:16.844899 systemd[1]: Finished systemd-udev-trigger.service. May 8 06:46:16.846542 systemd[1]: Starting systemd-udev-settle.service... May 8 06:46:16.865797 systemd[1]: Finished systemd-journal-flush.service. May 8 06:46:16.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.875137 kernel: audit: type=1130 audit(1746686776.866:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.873311 systemd[1]: Finished flatcar-tmpfiles.service. May 8 06:46:16.875286 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 06:46:16.874891 systemd[1]: Starting systemd-sysusers.service... May 8 06:46:16.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.881127 kernel: audit: type=1130 audit(1746686776.873:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.916255 systemd[1]: Finished systemd-sysusers.service. May 8 06:46:16.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.917944 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 06:46:16.923216 kernel: audit: type=1130 audit(1746686776.916:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:16.963135 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 06:46:16.973120 kernel: audit: type=1130 audit(1746686776.963:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:17.477565 systemd[1]: Finished systemd-hwdb-update.service. May 8 06:46:17.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:17.491611 systemd[1]: Starting systemd-udevd.service... May 8 06:46:17.492358 kernel: audit: type=1130 audit(1746686777.478:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:17.535723 systemd-udevd[1023]: Using default interface naming scheme 'v252'. May 8 06:46:17.597856 systemd[1]: Started systemd-udevd.service. May 8 06:46:17.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:17.602637 systemd[1]: Starting systemd-networkd.service... May 8 06:46:17.614998 kernel: audit: type=1130 audit(1746686777.599:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:17.629443 systemd[1]: Starting systemd-userdbd.service... May 8 06:46:17.693837 systemd[1]: Found device dev-ttyS0.device. May 8 06:46:17.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:17.737626 systemd[1]: Started systemd-userdbd.service. May 8 06:46:17.744165 kernel: audit: type=1130 audit(1746686777.738:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:17.759755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
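Unit names such as dev-disk-by\x2dlabel-OEM.device and system-coreos\x2dmetadata\x2dsshkeys.slice above use systemd's escaping, where "/" becomes "-" and a literal "-" becomes "\x2d". The canonical tool is systemd-escape --unescape --path; the sketch below only shows the idea for the OEM device unit found above and is not a full implementation of the escaping rules.

    import re

    def unescape_path_unit(unit: str) -> str:
        """Roughly invert systemd path escaping: '-' -> '/', '\\xNN' -> chr(0xNN)."""
        body = unit.rsplit(".", 1)[0]           # drop the ".device" suffix
        body = body.replace("-", "/")           # '-' encodes '/'
        body = re.sub(r"\\x([0-9a-fA-F]{2})",   # '\x2d' encodes a literal '-'
                      lambda m: chr(int(m.group(1), 16)), body)
        return "/" + body

    print(unescape_path_unit(r"dev-disk-by\x2dlabel-OEM.device"))  # -> /dev/disk/by-label/OEM
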
May 8 06:46:17.773180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 06:46:17.790118 kernel: ACPI: button: Power Button [PWRF] May 8 06:46:17.777000 audit[1041]: AVC avc: denied { confidentiality } for pid=1041 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 8 06:46:17.777000 audit[1041]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e9498ac850 a1=338ac a2=7fd4ee6a2bc5 a3=5 items=110 ppid=1023 pid=1041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 06:46:17.777000 audit: CWD cwd="/" May 8 06:46:17.777000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=1 name=(null) inode=13095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=2 name=(null) inode=13095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=3 name=(null) inode=13096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=4 name=(null) inode=13095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=5 name=(null) inode=13097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=6 name=(null) inode=13095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=7 name=(null) inode=13098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=8 name=(null) inode=13098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=9 name=(null) inode=13099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=10 name=(null) inode=13098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=11 name=(null) inode=13100 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=12 name=(null) inode=13098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=13 name=(null) inode=13101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=14 name=(null) inode=13098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=15 name=(null) inode=13102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=16 name=(null) inode=13098 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=17 name=(null) inode=13103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=18 name=(null) inode=13095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=19 name=(null) inode=13104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=20 name=(null) inode=13104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=21 name=(null) inode=13105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=22 name=(null) inode=13104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=23 name=(null) inode=13106 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=24 name=(null) inode=13104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=25 name=(null) inode=13107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=26 name=(null) inode=13104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=27 name=(null) inode=13108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=28 name=(null) inode=13104 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH 
item=29 name=(null) inode=13109 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=30 name=(null) inode=13095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=31 name=(null) inode=13110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=32 name=(null) inode=13110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=33 name=(null) inode=13111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=34 name=(null) inode=13110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=35 name=(null) inode=13112 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=36 name=(null) inode=13110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=37 name=(null) inode=13113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=38 name=(null) inode=13110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=39 name=(null) inode=13114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=40 name=(null) inode=13110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=41 name=(null) inode=13115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=42 name=(null) inode=13095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=43 name=(null) inode=13116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=44 name=(null) inode=13116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=45 name=(null) inode=13117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=46 name=(null) inode=13116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=47 name=(null) inode=13118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=48 name=(null) inode=13116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=49 name=(null) inode=13119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=50 name=(null) inode=13116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=51 name=(null) inode=13120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=52 name=(null) inode=13116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=53 name=(null) inode=13121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=55 name=(null) inode=13122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=56 name=(null) inode=13122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=57 name=(null) inode=13123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=58 name=(null) inode=13122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=59 name=(null) inode=13124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=60 name=(null) inode=13122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=61 name=(null) inode=13125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=62 name=(null) inode=13125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=63 name=(null) inode=13126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=64 name=(null) inode=13125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=65 name=(null) inode=13127 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=66 name=(null) inode=13125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=67 name=(null) inode=13128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=68 name=(null) inode=13125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=69 name=(null) inode=13129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=70 name=(null) inode=13125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=71 name=(null) inode=13130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=72 name=(null) inode=13122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=73 name=(null) inode=13131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=74 name=(null) inode=13131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=75 name=(null) inode=13132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=76 name=(null) inode=13131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=77 name=(null) inode=13133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=78 name=(null) inode=13131 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=79 name=(null) inode=13134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=80 name=(null) inode=13131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=81 name=(null) inode=13135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=82 name=(null) inode=13131 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=83 name=(null) inode=13136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=84 name=(null) inode=13122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=85 name=(null) inode=13137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=86 name=(null) inode=13137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=87 name=(null) inode=13138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=88 name=(null) inode=13137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=89 name=(null) inode=13139 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=90 name=(null) inode=13137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=91 name=(null) inode=13140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=92 name=(null) inode=13137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=93 name=(null) inode=13141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=94 name=(null) inode=13137 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=95 name=(null) inode=13142 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=96 name=(null) inode=13122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=97 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=98 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=99 name=(null) inode=13144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=100 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=101 name=(null) inode=13145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=102 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=103 name=(null) inode=13146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=104 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=105 name=(null) inode=13147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=106 name=(null) inode=13143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=107 name=(null) inode=13148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PATH item=109 name=(null) inode=14294 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 06:46:17.777000 audit: PROCTITLE proctitle="(udev-worker)" May 8 06:46:17.905151 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 8 06:46:17.939163 kernel: input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input3 May 8 06:46:17.966149 kernel: mousedev: PS/2 mouse device common for all mice May 8 06:46:18.085066 systemd[1]: Finished systemd-udev-settle.service. May 8 06:46:18.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.089015 systemd[1]: Starting lvm2-activation-early.service... May 8 06:46:18.274443 lvm[1058]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 06:46:18.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.277090 systemd-networkd[1026]: lo: Link UP May 8 06:46:18.277145 systemd-networkd[1026]: lo: Gained carrier May 8 06:46:18.278178 systemd-networkd[1026]: Enumeration completed May 8 06:46:18.278414 systemd[1]: Started systemd-networkd.service. May 8 06:46:18.279999 systemd-networkd[1026]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 06:46:18.285462 systemd-networkd[1026]: eth0: Link UP May 8 06:46:18.285655 systemd-networkd[1026]: eth0: Gained carrier May 8 06:46:18.300298 systemd-networkd[1026]: eth0: DHCPv4 address 172.24.4.62/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 8 06:46:18.309079 systemd[1]: Finished lvm2-activation-early.service. May 8 06:46:18.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.310670 systemd[1]: Reached target cryptsetup.target. May 8 06:46:18.314538 systemd[1]: Starting lvm2-activation.service... May 8 06:46:18.325022 lvm[1060]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 06:46:18.366077 systemd[1]: Finished lvm2-activation.service. May 8 06:46:18.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.367551 systemd[1]: Reached target local-fs-pre.target. May 8 06:46:18.368766 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 06:46:18.368857 systemd[1]: Reached target local-fs.target. May 8 06:46:18.369984 systemd[1]: Reached target machines.target. May 8 06:46:18.374795 systemd[1]: Starting ldconfig.service... May 8 06:46:18.377543 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 06:46:18.377861 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 06:46:18.381467 systemd[1]: Starting systemd-boot-update.service... May 8 06:46:18.387251 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 06:46:18.391050 systemd[1]: Starting systemd-machine-id-commit.service... May 8 06:46:18.394765 systemd[1]: Starting systemd-sysext.service... 
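The DHCPv4 lease above (172.24.4.62/24 via gateway 172.24.4.1) was obtained because eth0 matched /usr/lib/systemd/network/zz-default.network. The exact contents of that unit are not shown in this log; a minimal catch-all DHCP .network file of the kind Flatcar ships typically looks like the following (illustrative sketch, not the file itself):

    [Match]
    # match any interface not claimed by a more specific .network file
    Name=*

    [Network]
    DHCP=yes

systemd-networkd applies .network files in lexical order, so the zz- prefix makes this a fallback that only configures interfaces nothing more specific has claimed.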
May 8 06:46:18.407828 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1063 (bootctl) May 8 06:46:18.410329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 06:46:18.445371 systemd[1]: Unmounting usr-share-oem.mount... May 8 06:46:18.452042 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 06:46:18.452271 systemd[1]: Unmounted usr-share-oem.mount. May 8 06:46:18.517332 kernel: loop0: detected capacity change from 0 to 210664 May 8 06:46:18.518196 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 8 06:46:18.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.689240 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 06:46:18.690670 systemd[1]: Finished systemd-machine-id-commit.service. May 8 06:46:18.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.731167 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 06:46:18.765141 kernel: loop1: detected capacity change from 0 to 210664 May 8 06:46:18.806278 (sd-sysext)[1082]: Using extensions 'kubernetes'. May 8 06:46:18.806984 (sd-sysext)[1082]: Merged extensions into '/usr'. May 8 06:46:18.842488 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) May 8 06:46:18.842488 systemd-fsck[1078]: /dev/vda1: 790 files, 120710/258078 clusters May 8 06:46:18.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.846141 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 06:46:18.848562 systemd[1]: Mounting boot.mount... May 8 06:46:18.849049 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 06:46:18.850456 systemd[1]: Mounting usr-share-oem.mount... May 8 06:46:18.851313 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 06:46:18.852596 systemd[1]: Starting modprobe@dm_mod.service... May 8 06:46:18.859123 systemd[1]: Starting modprobe@efi_pstore.service... May 8 06:46:18.860705 systemd[1]: Starting modprobe@loop.service... May 8 06:46:18.863229 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 06:46:18.863388 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 06:46:18.863527 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 06:46:18.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 06:46:18.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.866131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 06:46:18.866287 systemd[1]: Finished modprobe@dm_mod.service. May 8 06:46:18.867201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 06:46:18.867342 systemd[1]: Finished modprobe@efi_pstore.service. May 8 06:46:18.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.868264 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 06:46:18.868427 systemd[1]: Finished modprobe@loop.service. May 8 06:46:18.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.874898 systemd[1]: Mounted usr-share-oem.mount. May 8 06:46:18.879460 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 06:46:18.879584 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 06:46:18.881553 systemd[1]: Finished systemd-sysext.service. May 8 06:46:18.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:18.883709 systemd[1]: Starting ensure-sysext.service... May 8 06:46:18.885348 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 06:46:18.894783 systemd[1]: Mounted boot.mount. May 8 06:46:18.907484 systemd[1]: Reloading. May 8 06:46:18.910042 systemd-tmpfiles[1100]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 06:46:18.914211 systemd-tmpfiles[1100]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 06:46:18.916566 systemd-tmpfiles[1100]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
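The systemd-tmpfiles warnings just above ("Duplicate line for path ..., ignoring") mean two tmpfiles.d fragments declare the same path; the declaration read first takes effect and later ones are skipped, so the messages are harmless. tmpfiles.d lines use the format "Type Path Mode User Group Age Argument"; an illustrative pair that would trigger such a warning (not the host's actual fragments) could be:

    # fragment read first - this declaration wins
    d /run/lock 0755 root root -
    # later fragment, e.g. legacy.conf:13 - duplicate, reported and ignored
    d /run/lock 0755 root root -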
May 8 06:46:19.007386 /usr/lib/systemd/system-generators/torcx-generator[1123]: time="2025-05-08T06:46:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 06:46:19.007419 /usr/lib/systemd/system-generators/torcx-generator[1123]: time="2025-05-08T06:46:19Z" level=info msg="torcx already run" May 8 06:46:19.148241 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 06:46:19.148259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 06:46:19.184309 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 06:46:19.263685 systemd[1]: Finished systemd-boot-update.service. May 8 06:46:19.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.265538 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 06:46:19.268196 systemd[1]: Starting audit-rules.service... May 8 06:46:19.270153 systemd[1]: Starting clean-ca-certificates.service... May 8 06:46:19.272029 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 06:46:19.274441 systemd[1]: Starting systemd-resolved.service... May 8 06:46:19.281878 systemd[1]: Starting systemd-timesyncd.service... May 8 06:46:19.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.289318 systemd[1]: Starting systemd-update-utmp.service... May 8 06:46:19.292399 systemd[1]: Finished clean-ca-certificates.service. May 8 06:46:19.296393 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 06:46:19.320000 audit[1181]: SYSTEM_BOOT pid=1181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 06:46:19.332788 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 06:46:19.334107 systemd[1]: Starting modprobe@dm_mod.service... May 8 06:46:19.335855 systemd[1]: Starting modprobe@efi_pstore.service... May 8 06:46:19.338570 systemd[1]: Starting modprobe@loop.service... May 8 06:46:19.339175 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
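The warnings above flag settings that are deprecated on this systemd version: locksmithd.service still uses the cgroup-v1 era CPUShares= and MemoryLimit=, and docker.socket points at the legacy /var/run/ path. If one wanted to silence them, drop-in overrides along these lines would do it (paths and values are illustrative, not taken from this host):

    # /etc/systemd/system/locksmithd.service.d/override.conf
    [Service]
    CPUShares=
    CPUWeight=100
    MemoryLimit=
    MemoryMax=512M

    # /etc/systemd/system/docker.socket.d/override.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

Assigning an empty value first clears the inherited setting before the replacement is added; a `systemctl daemon-reload` then picks the drop-ins up.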
May 8 06:46:19.339314 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 06:46:19.339466 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 06:46:19.343994 systemd[1]: Finished systemd-update-utmp.service. May 8 06:46:19.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.346054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 06:46:19.346262 systemd[1]: Finished modprobe@dm_mod.service. May 8 06:46:19.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.348935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 06:46:19.349313 systemd[1]: Finished modprobe@efi_pstore.service. May 8 06:46:19.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.353575 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 06:46:19.357059 systemd[1]: Starting modprobe@dm_mod.service... May 8 06:46:19.359206 systemd[1]: Starting modprobe@efi_pstore.service... May 8 06:46:19.360062 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 06:46:19.360322 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 06:46:19.360496 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 06:46:19.364521 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 06:46:19.364712 systemd[1]: Finished modprobe@loop.service. May 8 06:46:19.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.365733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 06:46:19.365888 systemd[1]: Finished modprobe@dm_mod.service. 
May 8 06:46:19.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.366918 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 06:46:19.373611 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 06:46:19.375575 systemd[1]: Starting modprobe@dm_mod.service... May 8 06:46:19.377507 systemd[1]: Starting modprobe@drm.service... May 8 06:46:19.381152 systemd[1]: Starting modprobe@loop.service... May 8 06:46:19.381735 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 06:46:19.381881 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 06:46:19.383339 systemd[1]: Starting systemd-networkd-wait-online.service... May 8 06:46:19.384440 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 06:46:19.387455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 06:46:19.387638 systemd[1]: Finished modprobe@efi_pstore.service. May 8 06:46:19.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.388867 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 06:46:19.390369 systemd[1]: Finished ensure-sysext.service. May 8 06:46:19.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.401014 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 06:46:19.401192 systemd[1]: Finished modprobe@drm.service. May 8 06:46:19.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.403509 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 06:46:19.403691 systemd[1]: Finished modprobe@dm_mod.service. 
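The repeated modprobe@<module> start/stop pairs through this stretch of the log come from systemd's modprobe@.service template: each instance runs modprobe for the module named after the '@', exits, and is therefore reported as 'Finished' and 'Deactivated successfully' back to back. A simplified sketch of such a template (the real unit ships with systemd; this is only illustrative):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # '-' prefix: a missing module is not treated as a failure
    ExecStart=-/usr/sbin/modprobe -abq %i

So `systemctl start modprobe@dm_mod.service` amounts to a best-effort `modprobe dm_mod`.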
May 8 06:46:19.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.404620 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 06:46:19.404780 systemd[1]: Finished modprobe@loop.service. May 8 06:46:19.405324 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 06:46:19.415185 ldconfig[1062]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 06:46:19.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.415788 systemd[1]: Finished systemd-journal-catalog-update.service. May 8 06:46:19.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.427844 systemd[1]: Finished ldconfig.service. May 8 06:46:19.429608 systemd[1]: Starting systemd-update-done.service... May 8 06:46:19.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 06:46:19.441082 systemd[1]: Finished systemd-update-done.service. May 8 06:46:19.451075 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 06:46:19.451123 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 06:46:19.462000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 06:46:19.462000 audit[1221]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff104d1440 a2=420 a3=0 items=0 ppid=1175 pid=1221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 06:46:19.462000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 06:46:19.462443 augenrules[1221]: No rules May 8 06:46:19.463169 systemd[1]: Finished audit-rules.service. May 8 06:46:19.467299 systemd-resolved[1178]: Positive Trust Anchors: May 8 06:46:19.467533 systemd-resolved[1178]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 06:46:19.467647 systemd-resolved[1178]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 06:46:19.478462 systemd-resolved[1178]: Using system hostname 'ci-3510-3-7-n-500342624e.novalocal'. May 8 06:46:19.478982 systemd[1]: Started systemd-timesyncd.service. May 8 06:46:19.479693 systemd[1]: Reached target time-set.target. May 8 06:46:19.481365 systemd[1]: Started systemd-resolved.service. May 8 06:46:19.481935 systemd[1]: Reached target network.target. May 8 06:46:19.482488 systemd[1]: Reached target nss-lookup.target. May 8 06:46:19.483009 systemd[1]: Reached target sysinit.target. May 8 06:46:19.483553 systemd[1]: Started motdgen.path. May 8 06:46:19.483990 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 06:46:19.484673 systemd[1]: Started logrotate.timer. May 8 06:46:19.485223 systemd[1]: Started mdadm.timer. May 8 06:46:19.485664 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 06:46:19.486181 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 06:46:19.486214 systemd[1]: Reached target paths.target. May 8 06:46:19.486663 systemd[1]: Reached target timers.target. May 8 06:46:19.487383 systemd[1]: Listening on dbus.socket. May 8 06:46:19.488855 systemd[1]: Starting docker.socket... May 8 06:46:19.491059 systemd[1]: Listening on sshd.socket. May 8 06:46:19.491712 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 06:46:19.492080 systemd[1]: Listening on docker.socket. May 8 06:46:19.492615 systemd[1]: Reached target sockets.target. May 8 06:46:19.493078 systemd[1]: Reached target basic.target. May 8 06:46:19.493674 systemd[1]: System is tainted: cgroupsv1 May 8 06:46:19.493722 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 06:46:19.493745 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 06:46:19.494792 systemd[1]: Starting containerd.service... May 8 06:46:19.496198 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 8 06:46:19.501904 systemd[1]: Starting dbus.service... May 8 06:46:19.503664 systemd[1]: Starting enable-oem-cloudinit.service... May 8 06:46:19.505614 systemd[1]: Starting extend-filesystems.service... May 8 06:46:19.506941 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 06:46:19.508518 systemd[1]: Starting motdgen.service... May 8 06:46:19.512778 jq[1236]: false May 8 06:46:19.512955 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 06:46:19.516749 systemd[1]: Starting sshd-keygen.service... May 8 06:46:19.520916 systemd[1]: Starting systemd-logind.service... 
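augenrules reported 'No rules' above, and the audited auditctl syscall's PROCTITLE field (hex 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573) decodes to '/sbin/auditctl -R /etc/audit/audit.rules', i.e. an effectively empty rules file was loaded. If watch rules were wanted, a fragment such as the following could be dropped under /etc/audit/rules.d/ (illustrative examples, not present on this host):

    # watch password database changes
    -w /etc/passwd -p wa -k passwd_changes
    # log every execve() issued through 64-bit syscalls
    -a always,exit -F arch=b64 -S execve -k exec_log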
May 8 06:46:19.521486 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 06:46:19.521549 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 06:46:19.538615 jq[1246]: true May 8 06:46:19.523183 systemd[1]: Starting update-engine.service... May 8 06:46:19.527323 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 06:46:19.529611 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 06:46:19.529856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 06:46:19.543053 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 06:46:19.543286 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 06:46:19.557418 jq[1253]: true May 8 06:46:19.578329 systemd-timesyncd[1180]: Contacted time server 159.203.158.197:123 (0.flatcar.pool.ntp.org). May 8 06:46:19.578388 systemd-timesyncd[1180]: Initial clock synchronization to Thu 2025-05-08 06:46:19.281711 UTC. May 8 06:46:19.591380 extend-filesystems[1237]: Found loop1 May 8 06:46:19.594358 extend-filesystems[1237]: Found vda May 8 06:46:19.594376 systemd[1]: motdgen.service: Deactivated successfully. May 8 06:46:19.594599 systemd[1]: Finished motdgen.service. May 8 06:46:19.595041 extend-filesystems[1237]: Found vda1 May 8 06:46:19.595734 extend-filesystems[1237]: Found vda2 May 8 06:46:19.595734 extend-filesystems[1237]: Found vda3 May 8 06:46:19.604139 extend-filesystems[1237]: Found usr May 8 06:46:19.604139 extend-filesystems[1237]: Found vda4 May 8 06:46:19.604139 extend-filesystems[1237]: Found vda6 May 8 06:46:19.604139 extend-filesystems[1237]: Found vda7 May 8 06:46:19.604139 extend-filesystems[1237]: Found vda9 May 8 06:46:19.604139 extend-filesystems[1237]: Checking size of /dev/vda9 May 8 06:46:19.622427 dbus-daemon[1235]: [system] SELinux support is enabled May 8 06:46:19.622635 systemd[1]: Started dbus.service. May 8 06:46:19.625282 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 06:46:19.625310 systemd[1]: Reached target system-config.target. May 8 06:46:19.625805 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 06:46:19.625824 systemd[1]: Reached target user-config.target. May 8 06:46:19.636553 extend-filesystems[1237]: Resized partition /dev/vda9 May 8 06:46:19.641130 env[1251]: time="2025-05-08T06:46:19.641058071Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 8 06:46:19.646544 extend-filesystems[1290]: resize2fs 1.46.5 (30-Dec-2021) May 8 06:46:19.689015 bash[1285]: Updated "/home/core/.ssh/authorized_keys" May 8 06:46:19.689813 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 8 06:46:19.694123 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 8 06:46:19.701123 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 8 06:46:19.715844 env[1251]: time="2025-05-08T06:46:19.715795611Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 8 06:46:19.733397 update_engine[1245]: I0508 06:46:19.724360 1245 main.cc:92] Flatcar Update Engine starting May 8 06:46:19.733616 env[1251]: time="2025-05-08T06:46:19.733418433Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 06:46:19.733647 extend-filesystems[1290]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 06:46:19.733647 extend-filesystems[1290]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 06:46:19.733647 extend-filesystems[1290]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 8 06:46:19.740513 extend-filesystems[1237]: Resized filesystem in /dev/vda9 May 8 06:46:19.744655 env[1251]: time="2025-05-08T06:46:19.738904061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 06:46:19.744655 env[1251]: time="2025-05-08T06:46:19.738941912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 06:46:19.734288 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 06:46:19.744777 update_engine[1245]: I0508 06:46:19.736282 1245 update_check_scheduler.cc:74] Next update check in 4m49s May 8 06:46:19.734501 systemd[1]: Finished extend-filesystems.service. May 8 06:46:19.734821 systemd-logind[1244]: Watching system buttons on /dev/input/event1 (Power Button) May 8 06:46:19.734838 systemd-logind[1244]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 06:46:19.737238 systemd-logind[1244]: New seat seat0. May 8 06:46:19.737549 systemd[1]: Started update-engine.service. May 8 06:46:19.741241 systemd[1]: Started locksmithd.service. May 8 06:46:19.745496 systemd[1]: Started systemd-logind.service. May 8 06:46:19.748378 env[1251]: time="2025-05-08T06:46:19.748333235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 06:46:19.748434 env[1251]: time="2025-05-08T06:46:19.748376527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 06:46:19.748434 env[1251]: time="2025-05-08T06:46:19.748422693Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 8 06:46:19.748485 env[1251]: time="2025-05-08T06:46:19.748437070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 06:46:19.748554 env[1251]: time="2025-05-08T06:46:19.748530465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 06:46:19.748803 env[1251]: time="2025-05-08T06:46:19.748780033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 06:46:19.748975 env[1251]: time="2025-05-08T06:46:19.748947347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 06:46:19.749013 env[1251]: time="2025-05-08T06:46:19.748975841Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 06:46:19.749051 env[1251]: time="2025-05-08T06:46:19.749029722Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 8 06:46:19.749051 env[1251]: time="2025-05-08T06:46:19.749047054Z" level=info msg="metadata content store policy set" policy=shared May 8 06:46:19.759555 env[1251]: time="2025-05-08T06:46:19.759526288Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 06:46:19.759611 env[1251]: time="2025-05-08T06:46:19.759559230Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 06:46:19.759643 env[1251]: time="2025-05-08T06:46:19.759575080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 06:46:19.759668 env[1251]: time="2025-05-08T06:46:19.759645942Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759723 env[1251]: time="2025-05-08T06:46:19.759664728Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759757 env[1251]: time="2025-05-08T06:46:19.759724951Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759757 env[1251]: time="2025-05-08T06:46:19.759741031Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759809 env[1251]: time="2025-05-08T06:46:19.759773932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759809 env[1251]: time="2025-05-08T06:46:19.759792187Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759854 env[1251]: time="2025-05-08T06:46:19.759807586Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759854 env[1251]: time="2025-05-08T06:46:19.759822494Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 06:46:19.759901 env[1251]: time="2025-05-08T06:46:19.759852740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 06:46:19.759990 env[1251]: time="2025-05-08T06:46:19.759967506Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 06:46:19.760113 env[1251]: time="2025-05-08T06:46:19.760074496Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 06:46:19.760604 env[1251]: time="2025-05-08T06:46:19.760583521Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 06:46:19.760643 env[1251]: time="2025-05-08T06:46:19.760615050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 8 06:46:19.760643 env[1251]: time="2025-05-08T06:46:19.760631531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 06:46:19.760715 env[1251]: time="2025-05-08T06:46:19.760692545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760748 env[1251]: time="2025-05-08T06:46:19.760714777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760785 env[1251]: time="2025-05-08T06:46:19.760749041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760785 env[1251]: time="2025-05-08T06:46:19.760763468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760785 env[1251]: time="2025-05-08T06:46:19.760776703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760856 env[1251]: time="2025-05-08T06:46:19.760790048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760856 env[1251]: time="2025-05-08T06:46:19.760825905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760856 env[1251]: time="2025-05-08T06:46:19.760841214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 06:46:19.760928 env[1251]: time="2025-05-08T06:46:19.760856503Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 06:46:19.761045 env[1251]: time="2025-05-08T06:46:19.761023266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 06:46:19.761083 env[1251]: time="2025-05-08T06:46:19.761046549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 06:46:19.761083 env[1251]: time="2025-05-08T06:46:19.761061447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 06:46:19.761148 env[1251]: time="2025-05-08T06:46:19.761120498Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 06:46:19.761174 env[1251]: time="2025-05-08T06:46:19.761138682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 8 06:46:19.761206 env[1251]: time="2025-05-08T06:46:19.761169189Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 06:46:19.761206 env[1251]: time="2025-05-08T06:46:19.761190028Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 8 06:46:19.761253 env[1251]: time="2025-05-08T06:46:19.761224523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 06:46:19.761546 env[1251]: time="2025-05-08T06:46:19.761466527Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.761552548Z" level=info msg="Connect containerd service" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.761596280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.762354733Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.762587429Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.762630069Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.762676276Z" level=info msg="containerd successfully booted in 0.134426s" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.763690538Z" level=info msg="Start subscribing containerd event" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.763763034Z" level=info msg="Start recovering state" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.763841982Z" level=info msg="Start event monitor" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.763859946Z" level=info msg="Start snapshots syncer" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.763894220Z" level=info msg="Start cni network conf syncer for default" May 8 06:46:19.765297 env[1251]: time="2025-05-08T06:46:19.763919818Z" level=info msg="Start streaming server" May 8 06:46:19.762884 systemd[1]: Started containerd.service. May 8 06:46:19.906939 locksmithd[1297]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 06:46:20.202568 systemd-networkd[1026]: eth0: Gained IPv6LL May 8 06:46:20.203010 systemd[1]: Created slice system-sshd.slice. May 8 06:46:20.205874 systemd[1]: Finished systemd-networkd-wait-online.service. May 8 06:46:20.206608 systemd[1]: Reached target network-online.target. May 8 06:46:20.208954 systemd[1]: Starting kubelet.service... May 8 06:46:20.519454 sshd_keygen[1261]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 06:46:20.553816 systemd[1]: Finished sshd-keygen.service. May 8 06:46:20.555821 systemd[1]: Starting issuegen.service... May 8 06:46:20.557327 systemd[1]: Started sshd@0-172.24.4.62:22-172.24.4.1:41508.service. May 8 06:46:20.566144 systemd[1]: issuegen.service: Deactivated successfully. May 8 06:46:20.566367 systemd[1]: Finished issuegen.service. May 8 06:46:20.568433 systemd[1]: Starting systemd-user-sessions.service... May 8 06:46:20.576200 systemd[1]: Finished systemd-user-sessions.service. May 8 06:46:20.578208 systemd[1]: Started getty@tty1.service. May 8 06:46:20.580877 systemd[1]: Started serial-getty@ttyS0.service. May 8 06:46:20.581647 systemd[1]: Reached target getty.target. May 8 06:46:21.653169 sshd[1317]: Accepted publickey for core from 172.24.4.1 port 41508 ssh2: RSA SHA256:Tpa25K+sKxMFZYAwx5LzEEIakI/UEg3CT7ZY/hiJt50 May 8 06:46:21.657986 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:21.690211 systemd-logind[1244]: New session 1 of user core. May 8 06:46:21.693499 systemd[1]: Created slice user-500.slice. May 8 06:46:21.697035 systemd[1]: Starting user-runtime-dir@500.service... May 8 06:46:21.719003 systemd[1]: Finished user-runtime-dir@500.service. May 8 06:46:21.720770 systemd[1]: Starting user@500.service... May 8 06:46:21.727859 (systemd)[1330]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:21.744306 systemd[1]: Started kubelet.service. May 8 06:46:21.829738 systemd[1330]: Queued start job for default target default.target. May 8 06:46:21.830599 systemd[1330]: Reached target paths.target. May 8 06:46:21.830706 systemd[1330]: Reached target sockets.target. May 8 06:46:21.830790 systemd[1330]: Reached target timers.target. May 8 06:46:21.830865 systemd[1330]: Reached target basic.target. May 8 06:46:21.830968 systemd[1330]: Reached target default.target. May 8 06:46:21.831071 systemd[1330]: Startup finished in 94ms. May 8 06:46:21.831279 systemd[1]: Started user@500.service. 
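[Annotation] The "failed to load cni during init ... no network config found in /etc/cni/net.d" error above is expected on a first boot: the CRI plugin config printed above points containerd at NetworkPluginConfDir:/etc/cni/net.d (max 1 conf), and nothing has written a network config there yet; later in this log the kubelet notes that the CNI config will be dropped by another component (Cilium). Purely as an illustration of what containerd scans that directory for, here is a minimal sketch of a CNI conflist; the plugin type and subnet are placeholders, not what Cilium eventually installs:

```python
# Illustrative sketch only: writes a minimal CNI conflist of the shape the
# containerd CRI plugin scans /etc/cni/net.d for. The plugin type and subnet
# below are placeholders; on this host the real config is installed later by
# Cilium, so nothing like this is actually needed.
import json
import os

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",          # placeholder network name
    "plugins": [
        {
            "type": "bridge",       # placeholder plugin; Cilium uses its own
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "192.168.1.0/24"},
        }
    ],
}

conf_dir = "/etc/cni/net.d"
os.makedirs(conf_dir, exist_ok=True)
with open(os.path.join(conf_dir, "10-example.conflist"), "w") as f:
    json.dump(conflist, f, indent=2)
```

The "Start cni network conf syncer for default" entry above is the watcher that picks such a file up, so containerd does not need a restart once a config appears.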
May 8 06:46:21.832808 systemd[1]: Started session-1.scope. May 8 06:46:22.354706 systemd[1]: Started sshd@1-172.24.4.62:22-172.24.4.1:41518.service. May 8 06:46:23.030285 kubelet[1337]: E0508 06:46:23.030184 1337 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 06:46:23.034016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 06:46:23.034419 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 06:46:23.676771 sshd[1350]: Accepted publickey for core from 172.24.4.1 port 41518 ssh2: RSA SHA256:Tpa25K+sKxMFZYAwx5LzEEIakI/UEg3CT7ZY/hiJt50 May 8 06:46:23.679560 sshd[1350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:23.689200 systemd-logind[1244]: New session 2 of user core. May 8 06:46:23.690888 systemd[1]: Started session-2.scope. May 8 06:46:24.315916 sshd[1350]: pam_unix(sshd:session): session closed for user core May 8 06:46:24.322195 systemd[1]: Started sshd@2-172.24.4.62:22-172.24.4.1:46916.service. May 8 06:46:24.328571 systemd[1]: sshd@1-172.24.4.62:22-172.24.4.1:41518.service: Deactivated successfully. May 8 06:46:24.330248 systemd[1]: session-2.scope: Deactivated successfully. May 8 06:46:24.335748 systemd-logind[1244]: Session 2 logged out. Waiting for processes to exit. May 8 06:46:24.337716 systemd-logind[1244]: Removed session 2. May 8 06:46:25.469929 sshd[1357]: Accepted publickey for core from 172.24.4.1 port 46916 ssh2: RSA SHA256:Tpa25K+sKxMFZYAwx5LzEEIakI/UEg3CT7ZY/hiJt50 May 8 06:46:25.473528 sshd[1357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:25.483271 systemd-logind[1244]: New session 3 of user core. May 8 06:46:25.483988 systemd[1]: Started session-3.scope. May 8 06:46:26.109734 sshd[1357]: pam_unix(sshd:session): session closed for user core May 8 06:46:26.115252 systemd[1]: sshd@2-172.24.4.62:22-172.24.4.1:46916.service: Deactivated successfully. May 8 06:46:26.117285 systemd[1]: session-3.scope: Deactivated successfully. May 8 06:46:26.117918 systemd-logind[1244]: Session 3 logged out. Waiting for processes to exit. May 8 06:46:26.119884 systemd-logind[1244]: Removed session 3. May 8 06:46:26.631574 coreos-metadata[1231]: May 08 06:46:26.631 WARN failed to locate config-drive, using the metadata service API instead May 8 06:46:26.740145 coreos-metadata[1231]: May 08 06:46:26.740 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 8 06:46:26.926994 coreos-metadata[1231]: May 08 06:46:26.926 INFO Fetch successful May 8 06:46:26.927388 coreos-metadata[1231]: May 08 06:46:26.927 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 8 06:46:26.942446 coreos-metadata[1231]: May 08 06:46:26.942 INFO Fetch successful May 8 06:46:26.948575 unknown[1231]: wrote ssh authorized keys file for user: core May 8 06:46:26.985309 update-ssh-keys[1369]: Updated "/home/core/.ssh/authorized_keys" May 8 06:46:26.986834 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 8 06:46:26.987592 systemd[1]: Reached target multi-user.target. May 8 06:46:26.990560 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
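[Annotation] The coreos-metadata-sshkeys@core run above fetched the instance public key from the metadata service (the two URLs in the log) and rewrote /home/core/.ssh/authorized_keys. A rough Python equivalent of that flow, shown only to make the sequence concrete; the real implementation is the coreos-metadata binary, which also handles the config-drive fallback mentioned in the WARN line:

```python
# Rough equivalent of what coreos-metadata-sshkeys@core did above: fetch the
# instance public key from the EC2-compatible metadata service and append it
# to the core user's authorized_keys. Error handling and the config-drive
# fallback are omitted.
import os
import urllib.request

META = "http://169.254.169.254/latest/meta-data"

def fetch(path: str) -> str:
    with urllib.request.urlopen(f"{META}/{path}", timeout=5) as resp:
        return resp.read().decode()

# "public-keys" lists the key indices; index 0's OpenSSH key follows.
fetch("public-keys")
key = fetch("public-keys/0/openssh-key").strip()

ssh_dir = "/home/core/.ssh"
os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
with open(os.path.join(ssh_dir, "authorized_keys"), "a") as f:
    f.write(key + "\n")
```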
May 8 06:46:27.011223 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 8 06:46:27.011698 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 8 06:46:27.012605 systemd[1]: Startup finished in 8.157s (kernel) + 14.180s (userspace) = 22.337s. May 8 06:46:33.287046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 06:46:33.287528 systemd[1]: Stopped kubelet.service. May 8 06:46:33.290580 systemd[1]: Starting kubelet.service... May 8 06:46:33.523009 systemd[1]: Started kubelet.service. May 8 06:46:33.678953 kubelet[1382]: E0508 06:46:33.678691 1382 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 06:46:33.686055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 06:46:33.686573 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 06:46:36.018834 systemd[1]: Started sshd@3-172.24.4.62:22-172.24.4.1:39490.service. May 8 06:46:37.418842 sshd[1390]: Accepted publickey for core from 172.24.4.1 port 39490 ssh2: RSA SHA256:Tpa25K+sKxMFZYAwx5LzEEIakI/UEg3CT7ZY/hiJt50 May 8 06:46:37.422031 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:37.432530 systemd[1]: Started session-4.scope. May 8 06:46:37.435203 systemd-logind[1244]: New session 4 of user core. May 8 06:46:38.064345 sshd[1390]: pam_unix(sshd:session): session closed for user core May 8 06:46:38.065622 systemd[1]: Started sshd@4-172.24.4.62:22-172.24.4.1:39500.service. May 8 06:46:38.071712 systemd[1]: sshd@3-172.24.4.62:22-172.24.4.1:39490.service: Deactivated successfully. May 8 06:46:38.073215 systemd[1]: session-4.scope: Deactivated successfully. May 8 06:46:38.076834 systemd-logind[1244]: Session 4 logged out. Waiting for processes to exit. May 8 06:46:38.078891 systemd-logind[1244]: Removed session 4. May 8 06:46:39.219903 sshd[1395]: Accepted publickey for core from 172.24.4.1 port 39500 ssh2: RSA SHA256:Tpa25K+sKxMFZYAwx5LzEEIakI/UEg3CT7ZY/hiJt50 May 8 06:46:39.222460 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:39.232198 systemd-logind[1244]: New session 5 of user core. May 8 06:46:39.233509 systemd[1]: Started session-5.scope. May 8 06:46:39.862758 sshd[1395]: pam_unix(sshd:session): session closed for user core May 8 06:46:39.867019 systemd[1]: Started sshd@5-172.24.4.62:22-172.24.4.1:39508.service. May 8 06:46:39.874382 systemd[1]: sshd@4-172.24.4.62:22-172.24.4.1:39500.service: Deactivated successfully. May 8 06:46:39.877332 systemd[1]: session-5.scope: Deactivated successfully. May 8 06:46:39.878593 systemd-logind[1244]: Session 5 logged out. Waiting for processes to exit. May 8 06:46:39.881193 systemd-logind[1244]: Removed session 5. May 8 06:46:41.066579 sshd[1402]: Accepted publickey for core from 172.24.4.1 port 39508 ssh2: RSA SHA256:Tpa25K+sKxMFZYAwx5LzEEIakI/UEg3CT7ZY/hiJt50 May 8 06:46:41.069178 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:41.079684 systemd-logind[1244]: New session 6 of user core. May 8 06:46:41.080406 systemd[1]: Started session-6.scope. 
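[Annotation] The kubelet failures above, and the scheduled restarts that follow, all trace back to the missing /var/lib/kubelet/config.yaml. By 06:46:53 the file evidently exists, since kubelet then starts cleanly. For orientation only, the file kubelet is looking for is a KubeletConfiguration document; a minimal sketch of one is below. The real file on this node is produced by the bootstrap tooling and is more elaborate:

```python
# Illustrative only: the missing file kubelet complains about is a
# KubeletConfiguration document. A minimal one is just the YAML below; the
# actual /var/lib/kubelet/config.yaml on this node is generated later by the
# bootstrap tooling and contains considerably more.
import pathlib

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# everything else falls back to the built-in defaults
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_KUBELET_CONFIG)
```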
May 8 06:46:41.607895 sshd[1402]: pam_unix(sshd:session): session closed for user core May 8 06:46:41.613829 systemd[1]: Started sshd@6-172.24.4.62:22-172.24.4.1:39522.service. May 8 06:46:41.619727 systemd[1]: sshd@5-172.24.4.62:22-172.24.4.1:39508.service: Deactivated successfully. May 8 06:46:41.624563 systemd-logind[1244]: Session 6 logged out. Waiting for processes to exit. May 8 06:46:41.624677 systemd[1]: session-6.scope: Deactivated successfully. May 8 06:46:41.629997 systemd-logind[1244]: Removed session 6. May 8 06:46:42.848978 sshd[1409]: Accepted publickey for core from 172.24.4.1 port 39522 ssh2: RSA SHA256:Tpa25K+sKxMFZYAwx5LzEEIakI/UEg3CT7ZY/hiJt50 May 8 06:46:42.852044 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 06:46:42.861425 systemd-logind[1244]: New session 7 of user core. May 8 06:46:42.862536 systemd[1]: Started session-7.scope. May 8 06:46:43.384325 sudo[1415]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 06:46:43.384839 sudo[1415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 06:46:43.413976 systemd[1]: Starting coreos-metadata.service... May 8 06:46:43.938510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 06:46:43.939567 systemd[1]: Stopped kubelet.service. May 8 06:46:43.942903 systemd[1]: Starting kubelet.service... May 8 06:46:44.209010 systemd[1]: Started kubelet.service. May 8 06:46:44.410424 kubelet[1430]: E0508 06:46:44.410309 1430 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 06:46:44.414967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 06:46:44.415360 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 06:46:50.477574 coreos-metadata[1419]: May 08 06:46:50.477 WARN failed to locate config-drive, using the metadata service API instead May 8 06:46:50.567044 coreos-metadata[1419]: May 08 06:46:50.566 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 8 06:46:50.710529 coreos-metadata[1419]: May 08 06:46:50.710 INFO Fetch successful May 8 06:46:50.710878 coreos-metadata[1419]: May 08 06:46:50.710 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 8 06:46:50.725478 coreos-metadata[1419]: May 08 06:46:50.725 INFO Fetch successful May 8 06:46:50.725741 coreos-metadata[1419]: May 08 06:46:50.725 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 8 06:46:50.738990 coreos-metadata[1419]: May 08 06:46:50.737 INFO Fetch successful May 8 06:46:50.739327 coreos-metadata[1419]: May 08 06:46:50.739 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 8 06:46:50.745397 coreos-metadata[1419]: May 08 06:46:50.745 INFO Fetch successful May 8 06:46:50.745651 coreos-metadata[1419]: May 08 06:46:50.745 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 8 06:46:50.756952 coreos-metadata[1419]: May 08 06:46:50.756 INFO Fetch successful May 8 06:46:50.776962 systemd[1]: Finished coreos-metadata.service. May 8 06:46:52.595467 systemd[1]: Stopped kubelet.service. May 8 06:46:52.601257 systemd[1]: Starting kubelet.service... 
May 8 06:46:52.648246 systemd[1]: Reloading. May 8 06:46:52.749894 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2025-05-08T06:46:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 06:46:52.750288 /usr/lib/systemd/system-generators/torcx-generator[1499]: time="2025-05-08T06:46:52Z" level=info msg="torcx already run" May 8 06:46:52.849372 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 06:46:52.849390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 06:46:52.874640 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 06:46:52.951251 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 06:46:52.951337 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 06:46:52.951568 systemd[1]: Stopped kubelet.service. May 8 06:46:52.953267 systemd[1]: Starting kubelet.service... May 8 06:46:53.037635 systemd[1]: Started kubelet.service. May 8 06:46:53.123757 kubelet[1564]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 06:46:53.124154 kubelet[1564]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 06:46:53.124229 kubelet[1564]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 06:46:53.124402 kubelet[1564]: I0508 06:46:53.124376 1564 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 06:46:53.786832 kubelet[1564]: I0508 06:46:53.786768 1564 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 8 06:46:53.786832 kubelet[1564]: I0508 06:46:53.786827 1564 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 06:46:53.787417 kubelet[1564]: I0508 06:46:53.787373 1564 server.go:927] "Client rotation is on, will bootstrap in background" May 8 06:46:53.826453 kubelet[1564]: I0508 06:46:53.826418 1564 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 06:46:53.905479 kubelet[1564]: I0508 06:46:53.905330 1564 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 06:46:53.911652 kubelet[1564]: I0508 06:46:53.911576 1564 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 06:46:53.912071 kubelet[1564]: I0508 06:46:53.911650 1564 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.62","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 8 06:46:53.913515 kubelet[1564]: I0508 06:46:53.913455 1564 topology_manager.go:138] "Creating topology manager with none policy" May 8 06:46:53.913515 kubelet[1564]: I0508 06:46:53.913509 1564 container_manager_linux.go:301] "Creating device plugin manager" May 8 06:46:53.913777 kubelet[1564]: I0508 06:46:53.913718 1564 state_mem.go:36] "Initialized new in-memory state store" May 8 06:46:53.916031 kubelet[1564]: I0508 06:46:53.915990 1564 kubelet.go:400] "Attempting to sync node with API server" May 8 06:46:53.916031 kubelet[1564]: I0508 06:46:53.916037 1564 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 06:46:53.916313 kubelet[1564]: I0508 06:46:53.916079 1564 kubelet.go:312] "Adding apiserver pod source" May 8 06:46:53.916313 kubelet[1564]: I0508 06:46:53.916147 1564 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 06:46:53.917326 kubelet[1564]: E0508 06:46:53.917253 1564 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:46:53.917589 kubelet[1564]: E0508 06:46:53.917557 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:46:53.926438 kubelet[1564]: I0508 06:46:53.926400 1564 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 06:46:53.930428 kubelet[1564]: I0508 06:46:53.930395 1564 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 06:46:53.930698 kubelet[1564]: W0508 06:46:53.930672 1564 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 06:46:53.932193 kubelet[1564]: I0508 06:46:53.932163 1564 server.go:1264] "Started kubelet" May 8 06:46:53.940465 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 8 06:46:53.940809 kubelet[1564]: I0508 06:46:53.940779 1564 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 06:46:53.950903 kubelet[1564]: I0508 06:46:53.950840 1564 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 06:46:53.969152 kubelet[1564]: I0508 06:46:53.967774 1564 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 06:46:53.971974 kubelet[1564]: I0508 06:46:53.971919 1564 volume_manager.go:291] "Starting Kubelet Volume Manager" May 8 06:46:53.979077 kubelet[1564]: I0508 06:46:53.978944 1564 server.go:455] "Adding debug handlers to kubelet server" May 8 06:46:53.990216 kubelet[1564]: I0508 06:46:53.990148 1564 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 06:46:53.990689 kubelet[1564]: I0508 06:46:53.990665 1564 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 06:46:53.991079 kubelet[1564]: I0508 06:46:53.991023 1564 reconciler.go:26] "Reconciler: start to sync state" May 8 06:46:54.001358 kubelet[1564]: E0508 06:46:54.001226 1564 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.62.183d7a672647a394 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.62,UID:172.24.4.62,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.62,},FirstTimestamp:2025-05-08 06:46:53.93206978 +0000 UTC m=+0.885002676,LastTimestamp:2025-05-08 06:46:53.93206978 +0000 UTC m=+0.885002676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.62,}" May 8 06:46:54.001634 kubelet[1564]: W0508 06:46:54.001455 1564 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 06:46:54.001634 kubelet[1564]: E0508 06:46:54.001481 1564 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 06:46:54.001634 kubelet[1564]: W0508 06:46:54.001559 1564 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.62" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 06:46:54.001634 kubelet[1564]: E0508 06:46:54.001574 1564 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.62" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 06:46:54.003516 kubelet[1564]: I0508 06:46:54.003463 1564 factory.go:221] Registration of the systemd container factory 
successfully May 8 06:46:54.003631 kubelet[1564]: I0508 06:46:54.003552 1564 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 06:46:54.005210 kubelet[1564]: I0508 06:46:54.005192 1564 factory.go:221] Registration of the containerd container factory successfully May 8 06:46:54.015825 kubelet[1564]: E0508 06:46:54.015799 1564 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 06:46:54.027509 kubelet[1564]: I0508 06:46:54.027466 1564 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 06:46:54.027509 kubelet[1564]: I0508 06:46:54.027482 1564 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 06:46:54.027509 kubelet[1564]: I0508 06:46:54.027497 1564 state_mem.go:36] "Initialized new in-memory state store" May 8 06:46:54.033108 kubelet[1564]: E0508 06:46:54.032975 1564 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.62\" not found" node="172.24.4.62" May 8 06:46:54.040160 kubelet[1564]: I0508 06:46:54.038823 1564 policy_none.go:49] "None policy: Start" May 8 06:46:54.040743 kubelet[1564]: I0508 06:46:54.040714 1564 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 06:46:54.040743 kubelet[1564]: I0508 06:46:54.040736 1564 state_mem.go:35] "Initializing new in-memory state store" May 8 06:46:54.051394 kubelet[1564]: I0508 06:46:54.051349 1564 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 06:46:54.053827 kubelet[1564]: I0508 06:46:54.051487 1564 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 06:46:54.053889 kubelet[1564]: I0508 06:46:54.053881 1564 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 06:46:54.059404 kubelet[1564]: E0508 06:46:54.059380 1564 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.62\" not found" May 8 06:46:54.075214 kubelet[1564]: I0508 06:46:54.075190 1564 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.62" May 8 06:46:54.085178 kubelet[1564]: I0508 06:46:54.085158 1564 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.62" May 8 06:46:54.201299 kubelet[1564]: E0508 06:46:54.201225 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.241712 kubelet[1564]: I0508 06:46:54.241610 1564 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 06:46:54.246186 kubelet[1564]: I0508 06:46:54.246048 1564 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 06:46:54.246186 kubelet[1564]: I0508 06:46:54.246121 1564 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 06:46:54.246186 kubelet[1564]: I0508 06:46:54.246156 1564 kubelet.go:2337] "Starting kubelet main sync loop" May 8 06:46:54.246631 kubelet[1564]: E0508 06:46:54.246258 1564 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 8 06:46:54.303339 kubelet[1564]: E0508 06:46:54.302178 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.404346 kubelet[1564]: E0508 06:46:54.404299 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.505324 kubelet[1564]: E0508 06:46:54.505249 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.576930 sudo[1415]: pam_unix(sudo:session): session closed for user root May 8 06:46:54.606189 kubelet[1564]: E0508 06:46:54.606137 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.707161 kubelet[1564]: E0508 06:46:54.707088 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.791726 kubelet[1564]: I0508 06:46:54.791637 1564 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 8 06:46:54.792183 kubelet[1564]: W0508 06:46:54.791955 1564 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 06:46:54.792183 kubelet[1564]: W0508 06:46:54.792039 1564 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 06:46:54.808216 kubelet[1564]: E0508 06:46:54.808163 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.860075 sshd[1409]: pam_unix(sshd:session): session closed for user core May 8 06:46:54.866383 systemd[1]: sshd@6-172.24.4.62:22-172.24.4.1:39522.service: Deactivated successfully. May 8 06:46:54.868034 systemd[1]: session-7.scope: Deactivated successfully. May 8 06:46:54.870472 systemd-logind[1244]: Session 7 logged out. Waiting for processes to exit. May 8 06:46:54.872460 systemd-logind[1244]: Removed session 7. 
May 8 06:46:54.908632 kubelet[1564]: E0508 06:46:54.908550 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:54.917952 kubelet[1564]: E0508 06:46:54.917863 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:46:55.009386 kubelet[1564]: E0508 06:46:55.009315 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:55.109922 kubelet[1564]: E0508 06:46:55.109851 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:55.210924 kubelet[1564]: E0508 06:46:55.210786 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:55.312936 kubelet[1564]: E0508 06:46:55.312873 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:55.413492 kubelet[1564]: E0508 06:46:55.413428 1564 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.62\" not found" May 8 06:46:55.516152 kubelet[1564]: I0508 06:46:55.515946 1564 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 8 06:46:55.517564 env[1251]: time="2025-05-08T06:46:55.517382370Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 06:46:55.518576 kubelet[1564]: I0508 06:46:55.518542 1564 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 8 06:46:55.918727 kubelet[1564]: I0508 06:46:55.918346 1564 apiserver.go:52] "Watching apiserver" May 8 06:46:55.919234 kubelet[1564]: E0508 06:46:55.918972 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:46:55.953547 kubelet[1564]: I0508 06:46:55.953442 1564 topology_manager.go:215] "Topology Admit Handler" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" podNamespace="kube-system" podName="cilium-b62qc" May 8 06:46:55.953765 kubelet[1564]: I0508 06:46:55.953713 1564 topology_manager.go:215] "Topology Admit Handler" podUID="a85c7665-e68f-43f7-8902-ea084028b891" podNamespace="kube-system" podName="kube-proxy-jjbxp" May 8 06:46:55.992237 kubelet[1564]: I0508 06:46:55.992189 1564 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 06:46:56.003908 kubelet[1564]: I0508 06:46:56.003812 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cni-path\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.003908 kubelet[1564]: I0508 06:46:56.003895 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-etc-cni-netd\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004246 kubelet[1564]: I0508 06:46:56.003940 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-lib-modules\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004246 kubelet[1564]: I0508 06:46:56.003984 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-xtables-lock\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004246 kubelet[1564]: I0508 06:46:56.004032 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-config-path\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004246 kubelet[1564]: I0508 06:46:56.004073 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-run\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004246 kubelet[1564]: I0508 06:46:56.004191 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-hostproc\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004567 kubelet[1564]: I0508 06:46:56.004266 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-cgroup\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004567 kubelet[1564]: I0508 06:46:56.004363 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-hubble-tls\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004567 kubelet[1564]: I0508 06:46:56.004403 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a85c7665-e68f-43f7-8902-ea084028b891-lib-modules\") pod \"kube-proxy-jjbxp\" (UID: \"a85c7665-e68f-43f7-8902-ea084028b891\") " pod="kube-system/kube-proxy-jjbxp" May 8 06:46:56.004567 kubelet[1564]: I0508 06:46:56.004444 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv22p\" (UniqueName: \"kubernetes.io/projected/a85c7665-e68f-43f7-8902-ea084028b891-kube-api-access-bv22p\") pod \"kube-proxy-jjbxp\" (UID: \"a85c7665-e68f-43f7-8902-ea084028b891\") " pod="kube-system/kube-proxy-jjbxp" May 8 06:46:56.004567 kubelet[1564]: I0508 06:46:56.004483 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16df365b-13b6-42a5-8e92-f66ed6547dc7-clustermesh-secrets\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 
8 06:46:56.004890 kubelet[1564]: I0508 06:46:56.004542 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-kernel\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004890 kubelet[1564]: I0508 06:46:56.004582 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a85c7665-e68f-43f7-8902-ea084028b891-kube-proxy\") pod \"kube-proxy-jjbxp\" (UID: \"a85c7665-e68f-43f7-8902-ea084028b891\") " pod="kube-system/kube-proxy-jjbxp" May 8 06:46:56.004890 kubelet[1564]: I0508 06:46:56.004622 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-bpf-maps\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004890 kubelet[1564]: I0508 06:46:56.004659 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-net\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.004890 kubelet[1564]: I0508 06:46:56.004699 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd7ft\" (UniqueName: \"kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-kube-api-access-pd7ft\") pod \"cilium-b62qc\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " pod="kube-system/cilium-b62qc" May 8 06:46:56.005263 kubelet[1564]: I0508 06:46:56.004755 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a85c7665-e68f-43f7-8902-ea084028b891-xtables-lock\") pod \"kube-proxy-jjbxp\" (UID: \"a85c7665-e68f-43f7-8902-ea084028b891\") " pod="kube-system/kube-proxy-jjbxp" May 8 06:46:56.263953 env[1251]: time="2025-05-08T06:46:56.262180382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjbxp,Uid:a85c7665-e68f-43f7-8902-ea084028b891,Namespace:kube-system,Attempt:0,}" May 8 06:46:56.268395 env[1251]: time="2025-05-08T06:46:56.268277315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b62qc,Uid:16df365b-13b6-42a5-8e92-f66ed6547dc7,Namespace:kube-system,Attempt:0,}" May 8 06:46:56.919632 kubelet[1564]: E0508 06:46:56.919545 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:46:57.100370 env[1251]: time="2025-05-08T06:46:57.100310571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.103222 env[1251]: time="2025-05-08T06:46:57.103167205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.107643 env[1251]: time="2025-05-08T06:46:57.107561911Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.111686 env[1251]: time="2025-05-08T06:46:57.111581856Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.117625 env[1251]: time="2025-05-08T06:46:57.117573438Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.121980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2752877784.mount: Deactivated successfully. May 8 06:46:57.125722 env[1251]: time="2025-05-08T06:46:57.125648284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.140713 env[1251]: time="2025-05-08T06:46:57.140622115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.143995 env[1251]: time="2025-05-08T06:46:57.143937013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:46:57.206653 env[1251]: time="2025-05-08T06:46:57.205590043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:46:57.206653 env[1251]: time="2025-05-08T06:46:57.205675279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:46:57.206653 env[1251]: time="2025-05-08T06:46:57.205708073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:46:57.207060 env[1251]: time="2025-05-08T06:46:57.206779213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8 pid=1623 runtime=io.containerd.runc.v2 May 8 06:46:57.221791 env[1251]: time="2025-05-08T06:46:57.221720850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:46:57.221957 env[1251]: time="2025-05-08T06:46:57.221932733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:46:57.222063 env[1251]: time="2025-05-08T06:46:57.222041104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:46:57.222320 env[1251]: time="2025-05-08T06:46:57.222283297Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eefce7a5ecef9cc3c85a74d4216c29313484dae1b7eb60adc7105ae199903db0 pid=1622 runtime=io.containerd.runc.v2 May 8 06:46:57.267913 env[1251]: time="2025-05-08T06:46:57.267876778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b62qc,Uid:16df365b-13b6-42a5-8e92-f66ed6547dc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\"" May 8 06:46:57.270240 env[1251]: time="2025-05-08T06:46:57.270213027Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 06:46:57.283524 env[1251]: time="2025-05-08T06:46:57.283476448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jjbxp,Uid:a85c7665-e68f-43f7-8902-ea084028b891,Namespace:kube-system,Attempt:0,} returns sandbox id \"eefce7a5ecef9cc3c85a74d4216c29313484dae1b7eb60adc7105ae199903db0\"" May 8 06:46:57.920431 kubelet[1564]: E0508 06:46:57.920326 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:46:58.921078 kubelet[1564]: E0508 06:46:58.921003 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:46:59.921640 kubelet[1564]: E0508 06:46:59.921564 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:00.922521 kubelet[1564]: E0508 06:47:00.922474 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:01.922998 kubelet[1564]: E0508 06:47:01.922944 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:02.923573 kubelet[1564]: E0508 06:47:02.923481 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:03.924137 kubelet[1564]: E0508 06:47:03.924068 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:04.159158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890087218.mount: Deactivated successfully. May 8 06:47:04.685253 update_engine[1245]: I0508 06:47:04.685137 1245 update_attempter.cc:509] Updating boot flags... 
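[Annotation] The node was assigned the pod CIDR 192.168.1.0/24 above ("Updating runtime config through cri with podcidr"). A quick standard-library check of what that per-node range holds:

```python
# What the pod CIDR handed to this node (192.168.1.0/24, see above) provides.
import ipaddress

cidr = ipaddress.ip_network("192.168.1.0/24")
print(cidr.num_addresses)           # 256 addresses in the block
print(cidr.num_addresses - 2)       # 254 if the first/last addresses are excluded
print(cidr[1], "-", cidr[-2])       # 192.168.1.1 - 192.168.1.254
```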
May 8 06:47:04.924876 kubelet[1564]: E0508 06:47:04.924799 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:05.925772 kubelet[1564]: E0508 06:47:05.925695 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:06.925915 kubelet[1564]: E0508 06:47:06.925858 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:07.926890 kubelet[1564]: E0508 06:47:07.926803 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:08.413641 env[1251]: time="2025-05-08T06:47:08.413532057Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:08.417701 env[1251]: time="2025-05-08T06:47:08.417628386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:08.423527 env[1251]: time="2025-05-08T06:47:08.423447329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:08.424468 env[1251]: time="2025-05-08T06:47:08.424428412Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 06:47:08.429182 env[1251]: time="2025-05-08T06:47:08.428997808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 8 06:47:08.431132 env[1251]: time="2025-05-08T06:47:08.430773775Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 06:47:08.459971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068316885.mount: Deactivated successfully. May 8 06:47:08.470829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558948472.mount: Deactivated successfully. 
May 8 06:47:08.596653 env[1251]: time="2025-05-08T06:47:08.596569503Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\"" May 8 06:47:08.599821 env[1251]: time="2025-05-08T06:47:08.599726550Z" level=info msg="StartContainer for \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\"" May 8 06:47:08.693057 env[1251]: time="2025-05-08T06:47:08.692954477Z" level=info msg="StartContainer for \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\" returns successfully" May 8 06:47:08.928152 kubelet[1564]: E0508 06:47:08.928052 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:09.457935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922-rootfs.mount: Deactivated successfully. May 8 06:47:09.929249 kubelet[1564]: E0508 06:47:09.929199 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:10.205841 env[1251]: time="2025-05-08T06:47:10.205300718Z" level=info msg="shim disconnected" id=109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922 May 8 06:47:10.205841 env[1251]: time="2025-05-08T06:47:10.205392203Z" level=warning msg="cleaning up after shim disconnected" id=109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922 namespace=k8s.io May 8 06:47:10.205841 env[1251]: time="2025-05-08T06:47:10.205416480Z" level=info msg="cleaning up dead shim" May 8 06:47:10.223553 env[1251]: time="2025-05-08T06:47:10.223440509Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:47:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1758 runtime=io.containerd.runc.v2\n" May 8 06:47:10.315630 env[1251]: time="2025-05-08T06:47:10.315493017Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 06:47:10.347654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697028369.mount: Deactivated successfully. May 8 06:47:10.375840 env[1251]: time="2025-05-08T06:47:10.375741721Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\"" May 8 06:47:10.376999 env[1251]: time="2025-05-08T06:47:10.376695076Z" level=info msg="StartContainer for \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\"" May 8 06:47:10.462389 env[1251]: time="2025-05-08T06:47:10.461825311Z" level=info msg="StartContainer for \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\" returns successfully" May 8 06:47:10.465205 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 06:47:10.465502 systemd[1]: Stopped systemd-sysctl.service. May 8 06:47:10.465664 systemd[1]: Stopping systemd-sysctl.service... May 8 06:47:10.469263 systemd[1]: Starting systemd-sysctl.service... May 8 06:47:10.472326 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 06:47:10.481317 systemd[1]: Finished systemd-sysctl.service. 
May 8 06:47:10.499270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318-rootfs.mount: Deactivated successfully. May 8 06:47:10.519510 env[1251]: time="2025-05-08T06:47:10.519460601Z" level=info msg="shim disconnected" id=c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318 May 8 06:47:10.519808 env[1251]: time="2025-05-08T06:47:10.519783951Z" level=warning msg="cleaning up after shim disconnected" id=c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318 namespace=k8s.io May 8 06:47:10.519886 env[1251]: time="2025-05-08T06:47:10.519870326Z" level=info msg="cleaning up dead shim" May 8 06:47:10.528465 env[1251]: time="2025-05-08T06:47:10.528419124Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:47:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1824 runtime=io.containerd.runc.v2\n" May 8 06:47:10.930711 kubelet[1564]: E0508 06:47:10.930669 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:11.316743 env[1251]: time="2025-05-08T06:47:11.316228855Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 06:47:11.358631 env[1251]: time="2025-05-08T06:47:11.358521746Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\"" May 8 06:47:11.360639 env[1251]: time="2025-05-08T06:47:11.360557441Z" level=info msg="StartContainer for \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\"" May 8 06:47:11.446982 env[1251]: time="2025-05-08T06:47:11.446949709Z" level=info msg="StartContainer for \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\" returns successfully" May 8 06:47:11.470820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85-rootfs.mount: Deactivated successfully. May 8 06:47:11.606442 env[1251]: time="2025-05-08T06:47:11.606372498Z" level=info msg="shim disconnected" id=f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85 May 8 06:47:11.606442 env[1251]: time="2025-05-08T06:47:11.606426392Z" level=warning msg="cleaning up after shim disconnected" id=f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85 namespace=k8s.io May 8 06:47:11.606442 env[1251]: time="2025-05-08T06:47:11.606440638Z" level=info msg="cleaning up dead shim" May 8 06:47:11.616267 env[1251]: time="2025-05-08T06:47:11.616208954Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:47:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1882 runtime=io.containerd.runc.v2\n" May 8 06:47:11.770104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217595577.mount: Deactivated successfully. 
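[Annotation] The cilium init containers so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) each show the same containerd pattern: CreateContainer returns a container id, StartContainer succeeds, then a "shim disconnected" cleanup once the short-lived container exits, which is consistent with normal init-container completion rather than a crash. A small parsing sketch, offered only as a reading aid for journal lines of this shape, that pairs those events by container id:

```python
# Reading aid only: pair the containerd events seen above by container id.
# Note that the "StartContainer ... returns successfully" line matches the
# "started" pattern a second time, which is harmless for this purpose.
import re
from collections import defaultdict

patterns = {
    "created":     re.compile(r'returns container id \\?"([0-9a-f]{64})\\?"'),
    "started":     re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?"'),
    "shim-exited": re.compile(r'shim disconnected.*id=([0-9a-f]{64})'),
}

def lifecycle(journal_lines):
    events = defaultdict(list)
    for line in journal_lines:
        for name, pattern in patterns.items():
            match = pattern.search(line)
            if match:
                events[match.group(1)].append(name)
    return dict(events)
```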
May 8 06:47:11.931678 kubelet[1564]: E0508 06:47:11.931065 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:12.333929 env[1251]: time="2025-05-08T06:47:12.333850259Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 06:47:12.373899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699935614.mount: Deactivated successfully. May 8 06:47:12.403133 env[1251]: time="2025-05-08T06:47:12.402919668Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\"" May 8 06:47:12.404425 env[1251]: time="2025-05-08T06:47:12.404327960Z" level=info msg="StartContainer for \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\"" May 8 06:47:12.476543 env[1251]: time="2025-05-08T06:47:12.476508455Z" level=info msg="StartContainer for \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\" returns successfully" May 8 06:47:12.490439 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144-rootfs.mount: Deactivated successfully. May 8 06:47:12.743480 env[1251]: time="2025-05-08T06:47:12.742624290Z" level=info msg="shim disconnected" id=181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144 May 8 06:47:12.744049 env[1251]: time="2025-05-08T06:47:12.743999668Z" level=warning msg="cleaning up after shim disconnected" id=181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144 namespace=k8s.io May 8 06:47:12.744285 env[1251]: time="2025-05-08T06:47:12.744247512Z" level=info msg="cleaning up dead shim" May 8 06:47:12.766048 env[1251]: time="2025-05-08T06:47:12.765917389Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:47:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1941 runtime=io.containerd.runc.v2\n" May 8 06:47:12.931884 kubelet[1564]: E0508 06:47:12.931825 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:12.950941 env[1251]: time="2025-05-08T06:47:12.950866323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:12.953421 env[1251]: time="2025-05-08T06:47:12.953363816Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:12.955487 env[1251]: time="2025-05-08T06:47:12.955439343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:12.957412 env[1251]: time="2025-05-08T06:47:12.957355555Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:12.958247 env[1251]: time="2025-05-08T06:47:12.958180732Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 8 06:47:12.964439 env[1251]: time="2025-05-08T06:47:12.964338432Z" level=info msg="CreateContainer within sandbox \"eefce7a5ecef9cc3c85a74d4216c29313484dae1b7eb60adc7105ae199903db0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 06:47:12.999590 env[1251]: time="2025-05-08T06:47:12.998261036Z" level=info msg="CreateContainer within sandbox \"eefce7a5ecef9cc3c85a74d4216c29313484dae1b7eb60adc7105ae199903db0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c120f32df25097c2dc3e2b9c40faf44a339e06ac810e2a447fb6cbaae70fc0a2\"" May 8 06:47:12.999980 env[1251]: time="2025-05-08T06:47:12.999905079Z" level=info msg="StartContainer for \"c120f32df25097c2dc3e2b9c40faf44a339e06ac810e2a447fb6cbaae70fc0a2\"" May 8 06:47:13.085012 env[1251]: time="2025-05-08T06:47:13.084942528Z" level=info msg="StartContainer for \"c120f32df25097c2dc3e2b9c40faf44a339e06ac810e2a447fb6cbaae70fc0a2\" returns successfully" May 8 06:47:13.341674 env[1251]: time="2025-05-08T06:47:13.339572609Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 06:47:13.380801 env[1251]: time="2025-05-08T06:47:13.380710250Z" level=info msg="CreateContainer within sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\"" May 8 06:47:13.382550 env[1251]: time="2025-05-08T06:47:13.381835940Z" level=info msg="StartContainer for \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\"" May 8 06:47:13.459180 env[1251]: time="2025-05-08T06:47:13.456552788Z" level=info msg="StartContainer for \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\" returns successfully" May 8 06:47:13.460967 kubelet[1564]: I0508 06:47:13.460801 1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jjbxp" podStartSLOduration=3.784834889 podStartE2EDuration="19.460779199s" podCreationTimestamp="2025-05-08 06:46:54 +0000 UTC" firstStartedPulling="2025-05-08 06:46:57.28480932 +0000 UTC m=+4.237742165" lastFinishedPulling="2025-05-08 06:47:12.96075359 +0000 UTC m=+19.913686475" observedRunningTime="2025-05-08 06:47:13.446211188 +0000 UTC m=+20.399144073" watchObservedRunningTime="2025-05-08 06:47:13.460779199 +0000 UTC m=+20.413712054" May 8 06:47:13.492688 systemd[1]: run-containerd-runc-k8s.io-4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878-runc.Ao6Bgw.mount: Deactivated successfully. 
May 8 06:47:13.598405 kubelet[1564]: I0508 06:47:13.597462 1564 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 8 06:47:13.916656 kubelet[1564]: E0508 06:47:13.916503 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:13.919133 kernel: Initializing XFRM netlink socket May 8 06:47:13.934039 kubelet[1564]: E0508 06:47:13.933963 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:14.934738 kubelet[1564]: E0508 06:47:14.934667 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:15.689391 systemd-networkd[1026]: cilium_host: Link UP May 8 06:47:15.695480 systemd-networkd[1026]: cilium_net: Link UP May 8 06:47:15.696215 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 8 06:47:15.701322 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 8 06:47:15.701627 systemd-networkd[1026]: cilium_net: Gained carrier May 8 06:47:15.703082 systemd-networkd[1026]: cilium_host: Gained carrier May 8 06:47:15.760213 systemd-networkd[1026]: cilium_net: Gained IPv6LL May 8 06:47:15.826493 systemd-networkd[1026]: cilium_vxlan: Link UP May 8 06:47:15.826503 systemd-networkd[1026]: cilium_vxlan: Gained carrier May 8 06:47:15.935479 kubelet[1564]: E0508 06:47:15.935416 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:16.090373 systemd-networkd[1026]: cilium_host: Gained IPv6LL May 8 06:47:16.158252 kernel: NET: Registered PF_ALG protocol family May 8 06:47:16.935941 kubelet[1564]: E0508 06:47:16.935889 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:17.019693 systemd-networkd[1026]: lxc_health: Link UP May 8 06:47:17.028312 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 8 06:47:17.027960 systemd-networkd[1026]: lxc_health: Gained carrier May 8 06:47:17.099296 systemd-networkd[1026]: cilium_vxlan: Gained IPv6LL May 8 06:47:17.937081 kubelet[1564]: E0508 06:47:17.937003 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:18.300936 kubelet[1564]: I0508 06:47:18.300761 1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b62qc" podStartSLOduration=13.142810667 podStartE2EDuration="24.300734599s" podCreationTimestamp="2025-05-08 06:46:54 +0000 UTC" firstStartedPulling="2025-05-08 06:46:57.269540845 +0000 UTC m=+4.222473700" lastFinishedPulling="2025-05-08 06:47:08.427464736 +0000 UTC m=+15.380397632" observedRunningTime="2025-05-08 06:47:14.404346514 +0000 UTC m=+21.357279479" watchObservedRunningTime="2025-05-08 06:47:18.300734599 +0000 UTC m=+25.253667484" May 8 06:47:18.506413 systemd-networkd[1026]: lxc_health: Gained IPv6LL May 8 06:47:18.937464 kubelet[1564]: E0508 06:47:18.937423 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:19.938485 kubelet[1564]: E0508 06:47:19.938431 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:20.498166 kubelet[1564]: I0508 06:47:20.498082 1564 topology_manager.go:215] "Topology Admit 
Handler" podUID="46b2c392-5852-41d8-845e-e76a6df675d8" podNamespace="default" podName="nginx-deployment-85f456d6dd-z56h9" May 8 06:47:20.588880 kubelet[1564]: I0508 06:47:20.588764 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz9v8\" (UniqueName: \"kubernetes.io/projected/46b2c392-5852-41d8-845e-e76a6df675d8-kube-api-access-sz9v8\") pod \"nginx-deployment-85f456d6dd-z56h9\" (UID: \"46b2c392-5852-41d8-845e-e76a6df675d8\") " pod="default/nginx-deployment-85f456d6dd-z56h9" May 8 06:47:20.807337 env[1251]: time="2025-05-08T06:47:20.806657343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-z56h9,Uid:46b2c392-5852-41d8-845e-e76a6df675d8,Namespace:default,Attempt:0,}" May 8 06:47:20.878331 systemd-networkd[1026]: lxca8ae09428ae4: Link UP May 8 06:47:20.904144 kernel: eth0: renamed from tmpb7dc1 May 8 06:47:20.922717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 06:47:20.922815 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca8ae09428ae4: link becomes ready May 8 06:47:20.924172 systemd-networkd[1026]: lxca8ae09428ae4: Gained carrier May 8 06:47:20.939347 kubelet[1564]: E0508 06:47:20.939274 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:21.940193 kubelet[1564]: E0508 06:47:21.940153 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:22.505199 env[1251]: time="2025-05-08T06:47:22.505129874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:47:22.505599 env[1251]: time="2025-05-08T06:47:22.505573728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:47:22.505719 env[1251]: time="2025-05-08T06:47:22.505696530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:47:22.506127 env[1251]: time="2025-05-08T06:47:22.506063207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7dc1cb7c3d17b9b27bc5e7d306c4232321c6a39fc65e2e8a8e3c8260c3e70b2 pid=2626 runtime=io.containerd.runc.v2 May 8 06:47:22.526424 systemd[1]: run-containerd-runc-k8s.io-b7dc1cb7c3d17b9b27bc5e7d306c4232321c6a39fc65e2e8a8e3c8260c3e70b2-runc.m0H37u.mount: Deactivated successfully. 
May 8 06:47:22.569783 env[1251]: time="2025-05-08T06:47:22.569732861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-z56h9,Uid:46b2c392-5852-41d8-845e-e76a6df675d8,Namespace:default,Attempt:0,} returns sandbox id \"b7dc1cb7c3d17b9b27bc5e7d306c4232321c6a39fc65e2e8a8e3c8260c3e70b2\"" May 8 06:47:22.571821 env[1251]: time="2025-05-08T06:47:22.571746567Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 06:47:22.730922 systemd-networkd[1026]: lxca8ae09428ae4: Gained IPv6LL May 8 06:47:22.941530 kubelet[1564]: E0508 06:47:22.941458 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:23.942858 kubelet[1564]: E0508 06:47:23.942795 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:24.943398 kubelet[1564]: E0508 06:47:24.943352 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:25.944527 kubelet[1564]: E0508 06:47:25.944467 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:26.944846 kubelet[1564]: E0508 06:47:26.944798 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:27.106951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938947765.mount: Deactivated successfully. May 8 06:47:27.945787 kubelet[1564]: E0508 06:47:27.945716 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:28.946399 kubelet[1564]: E0508 06:47:28.946330 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:29.480656 env[1251]: time="2025-05-08T06:47:29.480549008Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:29.484412 env[1251]: time="2025-05-08T06:47:29.484339440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:29.491963 env[1251]: time="2025-05-08T06:47:29.491898884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:29.497312 env[1251]: time="2025-05-08T06:47:29.497222853Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 8 06:47:29.497620 env[1251]: time="2025-05-08T06:47:29.497467787Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:29.509387 env[1251]: time="2025-05-08T06:47:29.509327369Z" level=info msg="CreateContainer within sandbox \"b7dc1cb7c3d17b9b27bc5e7d306c4232321c6a39fc65e2e8a8e3c8260c3e70b2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 8 06:47:29.539503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131540905.mount: Deactivated 
successfully. May 8 06:47:29.549576 env[1251]: time="2025-05-08T06:47:29.549417365Z" level=info msg="CreateContainer within sandbox \"b7dc1cb7c3d17b9b27bc5e7d306c4232321c6a39fc65e2e8a8e3c8260c3e70b2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b710de7b7c492fa97b574938c8204acbd72818b86d2b5e5136e30d780cfe690b\"" May 8 06:47:29.551585 env[1251]: time="2025-05-08T06:47:29.551476697Z" level=info msg="StartContainer for \"b710de7b7c492fa97b574938c8204acbd72818b86d2b5e5136e30d780cfe690b\"" May 8 06:47:29.646670 env[1251]: time="2025-05-08T06:47:29.646630211Z" level=info msg="StartContainer for \"b710de7b7c492fa97b574938c8204acbd72818b86d2b5e5136e30d780cfe690b\" returns successfully" May 8 06:47:29.947448 kubelet[1564]: E0508 06:47:29.947280 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:30.442542 kubelet[1564]: I0508 06:47:30.442244 1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-z56h9" podStartSLOduration=3.510483276 podStartE2EDuration="10.442210629s" podCreationTimestamp="2025-05-08 06:47:20 +0000 UTC" firstStartedPulling="2025-05-08 06:47:22.571462998 +0000 UTC m=+29.524395843" lastFinishedPulling="2025-05-08 06:47:29.503190301 +0000 UTC m=+36.456123196" observedRunningTime="2025-05-08 06:47:30.438309479 +0000 UTC m=+37.391242404" watchObservedRunningTime="2025-05-08 06:47:30.442210629 +0000 UTC m=+37.395143534" May 8 06:47:30.948559 kubelet[1564]: E0508 06:47:30.948488 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:31.949216 kubelet[1564]: E0508 06:47:31.949143 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:32.950811 kubelet[1564]: E0508 06:47:32.950677 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:33.916784 kubelet[1564]: E0508 06:47:33.916709 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:33.951432 kubelet[1564]: E0508 06:47:33.951347 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:34.952485 kubelet[1564]: E0508 06:47:34.952406 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:35.952959 kubelet[1564]: E0508 06:47:35.952825 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:36.954700 kubelet[1564]: E0508 06:47:36.954637 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:37.955693 kubelet[1564]: E0508 06:47:37.955623 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:38.957458 kubelet[1564]: E0508 06:47:38.957342 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:39.958592 kubelet[1564]: E0508 06:47:39.958510 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:40.959718 kubelet[1564]: E0508 06:47:40.959656 1564 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:41.961370 kubelet[1564]: E0508 06:47:41.961210 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:42.962431 kubelet[1564]: E0508 06:47:42.962350 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:43.964138 kubelet[1564]: E0508 06:47:43.964031 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:44.087572 kubelet[1564]: I0508 06:47:44.087497 1564 topology_manager.go:215] "Topology Admit Handler" podUID="c882bf32-22e1-4197-9b08-9ae797a9bb93" podNamespace="default" podName="nfs-server-provisioner-0" May 8 06:47:44.170381 kubelet[1564]: I0508 06:47:44.170310 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c882bf32-22e1-4197-9b08-9ae797a9bb93-data\") pod \"nfs-server-provisioner-0\" (UID: \"c882bf32-22e1-4197-9b08-9ae797a9bb93\") " pod="default/nfs-server-provisioner-0" May 8 06:47:44.171568 kubelet[1564]: I0508 06:47:44.171382 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqm9n\" (UniqueName: \"kubernetes.io/projected/c882bf32-22e1-4197-9b08-9ae797a9bb93-kube-api-access-bqm9n\") pod \"nfs-server-provisioner-0\" (UID: \"c882bf32-22e1-4197-9b08-9ae797a9bb93\") " pod="default/nfs-server-provisioner-0" May 8 06:47:44.396510 env[1251]: time="2025-05-08T06:47:44.396371505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c882bf32-22e1-4197-9b08-9ae797a9bb93,Namespace:default,Attempt:0,}" May 8 06:47:44.491982 systemd-networkd[1026]: lxc320eb0a35d6d: Link UP May 8 06:47:44.508227 kernel: eth0: renamed from tmp8cb49 May 8 06:47:44.523352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 06:47:44.523472 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc320eb0a35d6d: link becomes ready May 8 06:47:44.524209 systemd-networkd[1026]: lxc320eb0a35d6d: Gained carrier May 8 06:47:44.900576 env[1251]: time="2025-05-08T06:47:44.900398519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:47:44.900909 env[1251]: time="2025-05-08T06:47:44.900501032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:47:44.900909 env[1251]: time="2025-05-08T06:47:44.900589188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:47:44.902228 env[1251]: time="2025-05-08T06:47:44.901363411Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cb492cf368b0815a0db6076e4e5d0780592cde83afaa694af1894add9e11cfb pid=2751 runtime=io.containerd.runc.v2 May 8 06:47:44.965288 kubelet[1564]: E0508 06:47:44.965152 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:45.017545 env[1251]: time="2025-05-08T06:47:45.017437184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c882bf32-22e1-4197-9b08-9ae797a9bb93,Namespace:default,Attempt:0,} returns sandbox id \"8cb492cf368b0815a0db6076e4e5d0780592cde83afaa694af1894add9e11cfb\"" May 8 06:47:45.023908 env[1251]: time="2025-05-08T06:47:45.023863376Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 8 06:47:45.706939 systemd-networkd[1026]: lxc320eb0a35d6d: Gained IPv6LL May 8 06:47:45.966482 kubelet[1564]: E0508 06:47:45.966298 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:46.967708 kubelet[1564]: E0508 06:47:46.967364 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:47.968814 kubelet[1564]: E0508 06:47:47.967922 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:48.970260 kubelet[1564]: E0508 06:47:48.970025 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:49.215948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856699067.mount: Deactivated successfully. 
May 8 06:47:49.970854 kubelet[1564]: E0508 06:47:49.970732 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:50.971488 kubelet[1564]: E0508 06:47:50.971312 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:51.973416 kubelet[1564]: E0508 06:47:51.972531 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:52.862588 env[1251]: time="2025-05-08T06:47:52.862207594Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:52.867726 env[1251]: time="2025-05-08T06:47:52.867661465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:52.873577 env[1251]: time="2025-05-08T06:47:52.873512825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:52.879308 env[1251]: time="2025-05-08T06:47:52.879249419Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:47:52.882156 env[1251]: time="2025-05-08T06:47:52.881489218Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 8 06:47:52.887589 env[1251]: time="2025-05-08T06:47:52.887551096Z" level=info msg="CreateContainer within sandbox \"8cb492cf368b0815a0db6076e4e5d0780592cde83afaa694af1894add9e11cfb\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 8 06:47:52.912897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount524156188.mount: Deactivated successfully. 
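The nfs-provisioner image pull above is requested at 06:47:45 and only returns its image reference at 06:47:52, which is why the provisioner pod's startup numbers later show a multi-second pull window. Pulling and unpacking the same reference through containerd's Go client looks roughly like the sketch below (same client/namespace setup as the earlier containerd example; timing output is just for illustration).

package main

import (
	"context"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	// WithPullUnpack unpacks the layers into a snapshot so a container can be
	// created from the image as soon as the pull returns, matching the
	// PullImage -> CreateContainer ordering in the log.
	image, err := client.Pull(ctx, "registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s in %s", image.Name(), time.Since(start))
}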
May 8 06:47:52.921346 env[1251]: time="2025-05-08T06:47:52.921258974Z" level=info msg="CreateContainer within sandbox \"8cb492cf368b0815a0db6076e4e5d0780592cde83afaa694af1894add9e11cfb\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f7a3e48af33df7adda4ea2884727bfa10cd83c0bb9c341ebbf9ebc239495a356\"" May 8 06:47:52.922878 env[1251]: time="2025-05-08T06:47:52.922784355Z" level=info msg="StartContainer for \"f7a3e48af33df7adda4ea2884727bfa10cd83c0bb9c341ebbf9ebc239495a356\"" May 8 06:47:52.974007 kubelet[1564]: E0508 06:47:52.973919 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:53.028132 env[1251]: time="2025-05-08T06:47:53.027644484Z" level=info msg="StartContainer for \"f7a3e48af33df7adda4ea2884727bfa10cd83c0bb9c341ebbf9ebc239495a356\" returns successfully" May 8 06:47:53.586268 kubelet[1564]: I0508 06:47:53.585964 1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.72464816 podStartE2EDuration="9.585877529s" podCreationTimestamp="2025-05-08 06:47:44 +0000 UTC" firstStartedPulling="2025-05-08 06:47:45.023133637 +0000 UTC m=+51.976066512" lastFinishedPulling="2025-05-08 06:47:52.884363016 +0000 UTC m=+59.837295881" observedRunningTime="2025-05-08 06:47:53.584704614 +0000 UTC m=+60.537637500" watchObservedRunningTime="2025-05-08 06:47:53.585877529 +0000 UTC m=+60.538810434" May 8 06:47:53.916565 kubelet[1564]: E0508 06:47:53.916256 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:53.974124 kubelet[1564]: E0508 06:47:53.974050 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:54.974969 kubelet[1564]: E0508 06:47:54.974839 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:55.975842 kubelet[1564]: E0508 06:47:55.975756 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:56.977081 kubelet[1564]: E0508 06:47:56.976986 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:57.978872 kubelet[1564]: E0508 06:47:57.978758 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:58.980736 kubelet[1564]: E0508 06:47:58.980659 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:47:59.981337 kubelet[1564]: E0508 06:47:59.981238 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:00.982919 kubelet[1564]: E0508 06:48:00.982675 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:01.984347 kubelet[1564]: E0508 06:48:01.984226 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:02.985539 kubelet[1564]: E0508 06:48:02.985466 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:03.101905 kubelet[1564]: I0508 06:48:03.101470 1564 
topology_manager.go:215] "Topology Admit Handler" podUID="34cd3d6e-ffd9-4a1c-97fa-bad449c07b2d" podNamespace="default" podName="test-pod-1" May 8 06:48:03.249035 kubelet[1564]: I0508 06:48:03.248748 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7ca530f-c9a7-4ecc-a2a4-f0826faeeab8\" (UniqueName: \"kubernetes.io/nfs/34cd3d6e-ffd9-4a1c-97fa-bad449c07b2d-pvc-d7ca530f-c9a7-4ecc-a2a4-f0826faeeab8\") pod \"test-pod-1\" (UID: \"34cd3d6e-ffd9-4a1c-97fa-bad449c07b2d\") " pod="default/test-pod-1" May 8 06:48:03.249492 kubelet[1564]: I0508 06:48:03.248964 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9b8\" (UniqueName: \"kubernetes.io/projected/34cd3d6e-ffd9-4a1c-97fa-bad449c07b2d-kube-api-access-9v9b8\") pod \"test-pod-1\" (UID: \"34cd3d6e-ffd9-4a1c-97fa-bad449c07b2d\") " pod="default/test-pod-1" May 8 06:48:03.449169 kernel: FS-Cache: Loaded May 8 06:48:03.525019 kernel: RPC: Registered named UNIX socket transport module. May 8 06:48:03.525421 kernel: RPC: Registered udp transport module. May 8 06:48:03.525534 kernel: RPC: Registered tcp transport module. May 8 06:48:03.525906 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 8 06:48:03.611270 kernel: FS-Cache: Netfs 'nfs' registered for caching May 8 06:48:03.837986 kernel: NFS: Registering the id_resolver key type May 8 06:48:03.838474 kernel: Key type id_resolver registered May 8 06:48:03.838617 kernel: Key type id_legacy registered May 8 06:48:03.906718 nfsidmap[2872]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 8 06:48:03.915616 nfsidmap[2873]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 8 06:48:03.986593 kubelet[1564]: E0508 06:48:03.986428 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:04.014706 env[1251]: time="2025-05-08T06:48:04.013205690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:34cd3d6e-ffd9-4a1c-97fa-bad449c07b2d,Namespace:default,Attempt:0,}" May 8 06:48:04.135226 systemd-networkd[1026]: lxc895c4a1858c4: Link UP May 8 06:48:04.143156 kernel: eth0: renamed from tmp51390 May 8 06:48:04.151384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 06:48:04.151518 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc895c4a1858c4: link becomes ready May 8 06:48:04.151713 systemd-networkd[1026]: lxc895c4a1858c4: Gained carrier May 8 06:48:04.445422 env[1251]: time="2025-05-08T06:48:04.444505591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:48:04.445422 env[1251]: time="2025-05-08T06:48:04.444662539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:48:04.445841 env[1251]: time="2025-05-08T06:48:04.444704378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:48:04.446042 env[1251]: time="2025-05-08T06:48:04.445992797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51390dc55ce9452ce6169b9eec3fef8cfd213290f2ca77a6402f4f0de3e08aaf pid=2900 runtime=io.containerd.runc.v2 May 8 06:48:04.535744 env[1251]: time="2025-05-08T06:48:04.535671736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:34cd3d6e-ffd9-4a1c-97fa-bad449c07b2d,Namespace:default,Attempt:0,} returns sandbox id \"51390dc55ce9452ce6169b9eec3fef8cfd213290f2ca77a6402f4f0de3e08aaf\"" May 8 06:48:04.539393 env[1251]: time="2025-05-08T06:48:04.539359262Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 06:48:04.986824 kubelet[1564]: E0508 06:48:04.986719 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:05.046279 env[1251]: time="2025-05-08T06:48:05.046155347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:48:05.050783 env[1251]: time="2025-05-08T06:48:05.050699149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:48:05.056160 env[1251]: time="2025-05-08T06:48:05.056041077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:48:05.064857 env[1251]: time="2025-05-08T06:48:05.064793905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:48:05.067637 env[1251]: time="2025-05-08T06:48:05.067522017Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 8 06:48:05.077191 env[1251]: time="2025-05-08T06:48:05.077027355Z" level=info msg="CreateContainer within sandbox \"51390dc55ce9452ce6169b9eec3fef8cfd213290f2ca77a6402f4f0de3e08aaf\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 8 06:48:05.127315 env[1251]: time="2025-05-08T06:48:05.127060495Z" level=info msg="CreateContainer within sandbox \"51390dc55ce9452ce6169b9eec3fef8cfd213290f2ca77a6402f4f0de3e08aaf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b5381bce4762db9df031dafd13728accbe6cb6991c9d04cc833f4dec45e1785b\"" May 8 06:48:05.128939 env[1251]: time="2025-05-08T06:48:05.128558450Z" level=info msg="StartContainer for \"b5381bce4762db9df031dafd13728accbe6cb6991c9d04cc833f4dec45e1785b\"" May 8 06:48:05.215370 env[1251]: time="2025-05-08T06:48:05.215283575Z" level=info msg="StartContainer for \"b5381bce4762db9df031dafd13728accbe6cb6991c9d04cc833f4dec45e1785b\" returns successfully" May 8 06:48:05.867735 systemd-networkd[1026]: lxc895c4a1858c4: Gained IPv6LL May 8 06:48:05.987152 kubelet[1564]: E0508 06:48:05.987022 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:06.988371 kubelet[1564]: E0508 06:48:06.988297 1564 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:07.989671 kubelet[1564]: E0508 06:48:07.989589 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:08.989980 kubelet[1564]: E0508 06:48:08.989844 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:09.990948 kubelet[1564]: E0508 06:48:09.990849 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:10.991645 kubelet[1564]: E0508 06:48:10.991549 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:11.992268 kubelet[1564]: E0508 06:48:11.992191 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:12.994221 kubelet[1564]: E0508 06:48:12.994138 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:13.877752 kubelet[1564]: I0508 06:48:13.877598 1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=27.344084952 podStartE2EDuration="27.877537436s" podCreationTimestamp="2025-05-08 06:47:46 +0000 UTC" firstStartedPulling="2025-05-08 06:48:04.538316531 +0000 UTC m=+71.491249376" lastFinishedPulling="2025-05-08 06:48:05.071768965 +0000 UTC m=+72.024701860" observedRunningTime="2025-05-08 06:48:05.636082991 +0000 UTC m=+72.589015956" watchObservedRunningTime="2025-05-08 06:48:13.877537436 +0000 UTC m=+80.830470331" May 8 06:48:13.921684 kubelet[1564]: E0508 06:48:13.921447 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:13.937307 systemd[1]: run-containerd-runc-k8s.io-4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878-runc.WlEpeY.mount: Deactivated successfully. May 8 06:48:13.967831 env[1251]: time="2025-05-08T06:48:13.967746361Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 06:48:13.972679 env[1251]: time="2025-05-08T06:48:13.972638623Z" level=info msg="StopContainer for \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\" with timeout 2 (s)" May 8 06:48:13.972985 env[1251]: time="2025-05-08T06:48:13.972958581Z" level=info msg="Stop container \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\" with signal terminated" May 8 06:48:13.987318 systemd-networkd[1026]: lxc_health: Link DOWN May 8 06:48:13.987329 systemd-networkd[1026]: lxc_health: Lost carrier May 8 06:48:13.995941 kubelet[1564]: E0508 06:48:13.995888 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:14.040036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878-rootfs.mount: Deactivated successfully. 
May 8 06:48:14.094161 kubelet[1564]: E0508 06:48:14.093991 1564 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 06:48:14.332059 env[1251]: time="2025-05-08T06:48:14.331846155Z" level=error msg="collecting metrics for 4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878" error="cgroups: cgroup deleted: unknown" May 8 06:48:14.767450 env[1251]: time="2025-05-08T06:48:14.767217567Z" level=info msg="shim disconnected" id=4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878 May 8 06:48:14.767957 env[1251]: time="2025-05-08T06:48:14.767878100Z" level=warning msg="cleaning up after shim disconnected" id=4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878 namespace=k8s.io May 8 06:48:14.768349 env[1251]: time="2025-05-08T06:48:14.768304989Z" level=info msg="cleaning up dead shim" May 8 06:48:14.784974 env[1251]: time="2025-05-08T06:48:14.784868790Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3032 runtime=io.containerd.runc.v2\n" May 8 06:48:14.853069 env[1251]: time="2025-05-08T06:48:14.852963471Z" level=info msg="StopContainer for \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\" returns successfully" May 8 06:48:14.854373 env[1251]: time="2025-05-08T06:48:14.854309424Z" level=info msg="StopPodSandbox for \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\"" May 8 06:48:14.854902 env[1251]: time="2025-05-08T06:48:14.854844799Z" level=info msg="Container to stop \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 06:48:14.855193 env[1251]: time="2025-05-08T06:48:14.855137525Z" level=info msg="Container to stop \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 06:48:14.855419 env[1251]: time="2025-05-08T06:48:14.855369425Z" level=info msg="Container to stop \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 06:48:14.855669 env[1251]: time="2025-05-08T06:48:14.855618307Z" level=info msg="Container to stop \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 06:48:14.855878 env[1251]: time="2025-05-08T06:48:14.855830851Z" level=info msg="Container to stop \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 06:48:14.861211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8-shm.mount: Deactivated successfully. May 8 06:48:14.920660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8-rootfs.mount: Deactivated successfully. 
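Once /etc/cni/net.d/05-cilium.conf is removed, the kubelet starts reporting "Container runtime network not ready ... NetworkReady=false ... cni plugin not initialized". That condition is read from the runtime's CRI Status response; a small sketch of querying it directly (same simplified CRI client setup as the RunPodSandbox sketch earlier) is below.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.Status(context.Background(), &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Conditions include RuntimeReady and NetworkReady; NetworkReady goes
	// false when no CNI config remains in /etc/cni/net.d, as in the log above.
	for _, cond := range resp.Status.Conditions {
		log.Printf("%s=%v reason=%q message=%q", cond.Type, cond.Status, cond.Reason, cond.Message)
	}
}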
May 8 06:48:14.970273 env[1251]: time="2025-05-08T06:48:14.970155918Z" level=info msg="shim disconnected" id=0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8 May 8 06:48:14.970273 env[1251]: time="2025-05-08T06:48:14.970251910Z" level=warning msg="cleaning up after shim disconnected" id=0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8 namespace=k8s.io May 8 06:48:14.970273 env[1251]: time="2025-05-08T06:48:14.970271798Z" level=info msg="cleaning up dead shim" May 8 06:48:14.985000 env[1251]: time="2025-05-08T06:48:14.984911849Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3066 runtime=io.containerd.runc.v2\n" May 8 06:48:14.986085 env[1251]: time="2025-05-08T06:48:14.986021835Z" level=info msg="TearDown network for sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" successfully" May 8 06:48:14.986372 env[1251]: time="2025-05-08T06:48:14.986316233Z" level=info msg="StopPodSandbox for \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" returns successfully" May 8 06:48:14.996492 kubelet[1564]: E0508 06:48:14.996423 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:15.156484 kubelet[1564]: I0508 06:48:15.156382 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-config-path\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.156484 kubelet[1564]: I0508 06:48:15.156493 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-run\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.156926 kubelet[1564]: I0508 06:48:15.156549 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-hostproc\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.156926 kubelet[1564]: I0508 06:48:15.156597 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-etc-cni-netd\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.156926 kubelet[1564]: I0508 06:48:15.156661 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-cgroup\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.156926 kubelet[1564]: I0508 06:48:15.156701 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-bpf-maps\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.156926 kubelet[1564]: I0508 06:48:15.156739 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-net\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.156926 kubelet[1564]: I0508 06:48:15.156782 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-lib-modules\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.157806 kubelet[1564]: I0508 06:48:15.156821 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-kernel\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.157806 kubelet[1564]: I0508 06:48:15.156860 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cni-path\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.157806 kubelet[1564]: I0508 06:48:15.156897 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-xtables-lock\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.157806 kubelet[1564]: I0508 06:48:15.157021 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-hubble-tls\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.157806 kubelet[1564]: I0508 06:48:15.157089 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16df365b-13b6-42a5-8e92-f66ed6547dc7-clustermesh-secrets\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.157806 kubelet[1564]: I0508 06:48:15.157187 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd7ft\" (UniqueName: \"kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-kube-api-access-pd7ft\") pod \"16df365b-13b6-42a5-8e92-f66ed6547dc7\" (UID: \"16df365b-13b6-42a5-8e92-f66ed6547dc7\") " May 8 06:48:15.158711 kubelet[1564]: I0508 06:48:15.158622 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.160141 kubelet[1564]: I0508 06:48:15.159060 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.160388 kubelet[1564]: I0508 06:48:15.159171 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.160633 kubelet[1564]: I0508 06:48:15.159219 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cni-path" (OuterVolumeSpecName: "cni-path") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.160908 kubelet[1564]: I0508 06:48:15.159257 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.175648 kubelet[1564]: I0508 06:48:15.164667 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.175648 kubelet[1564]: I0508 06:48:15.164829 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-hostproc" (OuterVolumeSpecName: "hostproc") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.175648 kubelet[1564]: I0508 06:48:15.164930 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.175648 kubelet[1564]: I0508 06:48:15.165023 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.175648 kubelet[1564]: I0508 06:48:15.165399 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:15.174225 systemd[1]: var-lib-kubelet-pods-16df365b\x2d13b6\x2d42a5\x2d8e92\x2df66ed6547dc7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpd7ft.mount: Deactivated successfully. May 8 06:48:15.176941 kubelet[1564]: I0508 06:48:15.165746 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-kube-api-access-pd7ft" (OuterVolumeSpecName: "kube-api-access-pd7ft") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "kube-api-access-pd7ft". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 06:48:15.191160 kubelet[1564]: I0508 06:48:15.187513 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 06:48:15.191160 kubelet[1564]: I0508 06:48:15.187876 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 06:48:15.191160 kubelet[1564]: I0508 06:48:15.188693 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16df365b-13b6-42a5-8e92-f66ed6547dc7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "16df365b-13b6-42a5-8e92-f66ed6547dc7" (UID: "16df365b-13b6-42a5-8e92-f66ed6547dc7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 06:48:15.191552 systemd[1]: var-lib-kubelet-pods-16df365b\x2d13b6\x2d42a5\x2d8e92\x2df66ed6547dc7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 06:48:15.198501 systemd[1]: var-lib-kubelet-pods-16df365b\x2d13b6\x2d42a5\x2d8e92\x2df66ed6547dc7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 8 06:48:15.258516 kubelet[1564]: I0508 06:48:15.258318 1564 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-cgroup\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.258516 kubelet[1564]: I0508 06:48:15.258436 1564 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-bpf-maps\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.258516 kubelet[1564]: I0508 06:48:15.258462 1564 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-net\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.258516 kubelet[1564]: I0508 06:48:15.258492 1564 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-etc-cni-netd\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.258516 kubelet[1564]: I0508 06:48:15.258514 1564 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-lib-modules\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258537 1564 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-host-proc-sys-kernel\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258561 1564 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-hubble-tls\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258584 1564 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16df365b-13b6-42a5-8e92-f66ed6547dc7-clustermesh-secrets\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258606 1564 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pd7ft\" (UniqueName: \"kubernetes.io/projected/16df365b-13b6-42a5-8e92-f66ed6547dc7-kube-api-access-pd7ft\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258627 1564 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cni-path\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258651 1564 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-xtables-lock\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258672 1564 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-hostproc\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259278 kubelet[1564]: I0508 06:48:15.258710 1564 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-config-path\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.259967 
kubelet[1564]: I0508 06:48:15.258737 1564 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16df365b-13b6-42a5-8e92-f66ed6547dc7-cilium-run\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:15.654881 kubelet[1564]: I0508 06:48:15.654826 1564 scope.go:117] "RemoveContainer" containerID="4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878" May 8 06:48:15.659624 env[1251]: time="2025-05-08T06:48:15.659533088Z" level=info msg="RemoveContainer for \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\"" May 8 06:48:15.668170 env[1251]: time="2025-05-08T06:48:15.667747248Z" level=info msg="RemoveContainer for \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\" returns successfully" May 8 06:48:15.668525 kubelet[1564]: I0508 06:48:15.668463 1564 scope.go:117] "RemoveContainer" containerID="181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144" May 8 06:48:15.674060 env[1251]: time="2025-05-08T06:48:15.673998274Z" level=info msg="RemoveContainer for \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\"" May 8 06:48:15.681599 env[1251]: time="2025-05-08T06:48:15.681470527Z" level=info msg="RemoveContainer for \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\" returns successfully" May 8 06:48:15.682739 kubelet[1564]: I0508 06:48:15.682451 1564 scope.go:117] "RemoveContainer" containerID="f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85" May 8 06:48:15.689465 env[1251]: time="2025-05-08T06:48:15.689297552Z" level=info msg="RemoveContainer for \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\"" May 8 06:48:15.697581 env[1251]: time="2025-05-08T06:48:15.697452119Z" level=info msg="RemoveContainer for \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\" returns successfully" May 8 06:48:15.698180 kubelet[1564]: I0508 06:48:15.698123 1564 scope.go:117] "RemoveContainer" containerID="c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318" May 8 06:48:15.701192 env[1251]: time="2025-05-08T06:48:15.701089908Z" level=info msg="RemoveContainer for \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\"" May 8 06:48:15.707876 env[1251]: time="2025-05-08T06:48:15.707816436Z" level=info msg="RemoveContainer for \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\" returns successfully" May 8 06:48:15.708549 kubelet[1564]: I0508 06:48:15.708480 1564 scope.go:117] "RemoveContainer" containerID="109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922" May 8 06:48:15.711780 env[1251]: time="2025-05-08T06:48:15.711728246Z" level=info msg="RemoveContainer for \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\"" May 8 06:48:15.717774 env[1251]: time="2025-05-08T06:48:15.717712747Z" level=info msg="RemoveContainer for \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\" returns successfully" May 8 06:48:15.718396 kubelet[1564]: I0508 06:48:15.718296 1564 scope.go:117] "RemoveContainer" containerID="4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878" May 8 06:48:15.718992 env[1251]: time="2025-05-08T06:48:15.718731659Z" level=error msg="ContainerStatus for \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\": not found" May 8 06:48:15.720003 
kubelet[1564]: E0508 06:48:15.719444 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\": not found" containerID="4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878" May 8 06:48:15.720003 kubelet[1564]: I0508 06:48:15.719564 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878"} err="failed to get container status \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d113e1428e2c8747e8a7f7deefe18de481f379037cf3a3d75b4a616db9c3878\": not found" May 8 06:48:15.720003 kubelet[1564]: I0508 06:48:15.719854 1564 scope.go:117] "RemoveContainer" containerID="181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144" May 8 06:48:15.721138 env[1251]: time="2025-05-08T06:48:15.720983440Z" level=error msg="ContainerStatus for \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\": not found" May 8 06:48:15.721657 kubelet[1564]: E0508 06:48:15.721578 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\": not found" containerID="181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144" May 8 06:48:15.722059 kubelet[1564]: I0508 06:48:15.721895 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144"} err="failed to get container status \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\": rpc error: code = NotFound desc = an error occurred when try to find container \"181c5b773e24384b0887a9144ea3769018c9b5d4afe69e088c51ffcd3ea2f144\": not found" May 8 06:48:15.722059 kubelet[1564]: I0508 06:48:15.721988 1564 scope.go:117] "RemoveContainer" containerID="f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85" May 8 06:48:15.723018 env[1251]: time="2025-05-08T06:48:15.722884505Z" level=error msg="ContainerStatus for \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\": not found" May 8 06:48:15.723587 kubelet[1564]: E0508 06:48:15.723426 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\": not found" containerID="f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85" May 8 06:48:15.723587 kubelet[1564]: I0508 06:48:15.723522 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85"} err="failed to get container status \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"f95608ca5243d5d6ed556292b08b01bd147271cc56a915a12865118964fc1e85\": not found" May 8 06:48:15.724025 kubelet[1564]: I0508 06:48:15.723848 1564 scope.go:117] "RemoveContainer" containerID="c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318" May 8 06:48:15.724661 env[1251]: time="2025-05-08T06:48:15.724529064Z" level=error msg="ContainerStatus for \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\": not found" May 8 06:48:15.724928 kubelet[1564]: E0508 06:48:15.724847 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\": not found" containerID="c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318" May 8 06:48:15.725063 kubelet[1564]: I0508 06:48:15.724914 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318"} err="failed to get container status \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\": rpc error: code = NotFound desc = an error occurred when try to find container \"c045bfb0a5a6628357858bcf06cb9745e314e5e39dfd916d279a9d14c3239318\": not found" May 8 06:48:15.725063 kubelet[1564]: I0508 06:48:15.724960 1564 scope.go:117] "RemoveContainer" containerID="109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922" May 8 06:48:15.725460 env[1251]: time="2025-05-08T06:48:15.725350423Z" level=error msg="ContainerStatus for \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\": not found" May 8 06:48:15.726088 kubelet[1564]: E0508 06:48:15.725904 1564 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\": not found" containerID="109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922" May 8 06:48:15.726088 kubelet[1564]: I0508 06:48:15.726017 1564 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922"} err="failed to get container status \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\": rpc error: code = NotFound desc = an error occurred when try to find container \"109afe6eaf0b4ded7ffe4e8576909d015a4c800353be16e56e1aba9f49102922\": not found" May 8 06:48:15.929203 kubelet[1564]: I0508 06:48:15.927087 1564 setters.go:580] "Node became not ready" node="172.24.4.62" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T06:48:15Z","lastTransitionTime":"2025-05-08T06:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 06:48:15.997429 kubelet[1564]: E0508 06:48:15.997350 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:16.252233 
kubelet[1564]: I0508 06:48:16.252032 1564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" path="/var/lib/kubelet/pods/16df365b-13b6-42a5-8e92-f66ed6547dc7/volumes" May 8 06:48:16.999495 kubelet[1564]: E0508 06:48:16.999246 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:17.999686 kubelet[1564]: E0508 06:48:17.999624 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:19.000918 kubelet[1564]: E0508 06:48:19.000844 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:19.096405 kubelet[1564]: E0508 06:48:19.096234 1564 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 06:48:19.125363 kubelet[1564]: I0508 06:48:19.125207 1564 topology_manager.go:215] "Topology Admit Handler" podUID="2fe43c3e-e572-446e-a9a1-0917aa155240" podNamespace="kube-system" podName="cilium-operator-599987898-9d8lr" May 8 06:48:19.125769 kubelet[1564]: E0508 06:48:19.125416 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" containerName="mount-cgroup" May 8 06:48:19.125769 kubelet[1564]: E0508 06:48:19.125454 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" containerName="mount-bpf-fs" May 8 06:48:19.125769 kubelet[1564]: E0508 06:48:19.125471 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" containerName="cilium-agent" May 8 06:48:19.125769 kubelet[1564]: E0508 06:48:19.125487 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" containerName="apply-sysctl-overwrites" May 8 06:48:19.125769 kubelet[1564]: E0508 06:48:19.125504 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" containerName="clean-cilium-state" May 8 06:48:19.125769 kubelet[1564]: I0508 06:48:19.125610 1564 memory_manager.go:354] "RemoveStaleState removing state" podUID="16df365b-13b6-42a5-8e92-f66ed6547dc7" containerName="cilium-agent" May 8 06:48:19.140306 kubelet[1564]: W0508 06:48:19.140227 1564 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.62" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.62' and this object May 8 06:48:19.140306 kubelet[1564]: E0508 06:48:19.140312 1564 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.62" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.62' and this object May 8 06:48:19.147184 kubelet[1564]: I0508 06:48:19.147071 1564 topology_manager.go:215] "Topology Admit Handler" podUID="b108c8eb-04c7-4b59-b196-d0b375e5fb44" podNamespace="kube-system" podName="cilium-lmkrc" May 8 06:48:19.185254 kubelet[1564]: I0508 06:48:19.185018 1564 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fe43c3e-e572-446e-a9a1-0917aa155240-cilium-config-path\") pod \"cilium-operator-599987898-9d8lr\" (UID: \"2fe43c3e-e572-446e-a9a1-0917aa155240\") " pod="kube-system/cilium-operator-599987898-9d8lr" May 8 06:48:19.185526 kubelet[1564]: I0508 06:48:19.185276 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9l2g\" (UniqueName: \"kubernetes.io/projected/2fe43c3e-e572-446e-a9a1-0917aa155240-kube-api-access-j9l2g\") pod \"cilium-operator-599987898-9d8lr\" (UID: \"2fe43c3e-e572-446e-a9a1-0917aa155240\") " pod="kube-system/cilium-operator-599987898-9d8lr" May 8 06:48:19.287261 kubelet[1564]: I0508 06:48:19.286394 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-net\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.287863 kubelet[1564]: I0508 06:48:19.287731 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cni-path\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.288232 kubelet[1564]: I0508 06:48:19.288190 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-lib-modules\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.288698 kubelet[1564]: I0508 06:48:19.288531 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-bpf-maps\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.289054 kubelet[1564]: I0508 06:48:19.288991 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-etc-cni-netd\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.289477 kubelet[1564]: I0508 06:48:19.289378 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-config-path\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.289841 kubelet[1564]: I0508 06:48:19.289777 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hubble-tls\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.291260 kubelet[1564]: I0508 06:48:19.291076 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-cgroup\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.291555 kubelet[1564]: I0508 06:48:19.291502 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-kernel\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.291801 kubelet[1564]: I0508 06:48:19.291761 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-run\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.292083 kubelet[1564]: I0508 06:48:19.292044 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hostproc\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.292457 kubelet[1564]: I0508 06:48:19.292400 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-xtables-lock\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.292701 kubelet[1564]: I0508 06:48:19.292664 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-clustermesh-secrets\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.292957 kubelet[1564]: I0508 06:48:19.292915 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-ipsec-secrets\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:19.293241 kubelet[1564]: I0508 06:48:19.293198 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mpz\" (UniqueName: \"kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-kube-api-access-64mpz\") pod \"cilium-lmkrc\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " pod="kube-system/cilium-lmkrc" May 8 06:48:20.002325 kubelet[1564]: E0508 06:48:20.002211 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:20.291687 kubelet[1564]: E0508 06:48:20.291493 1564 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 8 06:48:20.292311 kubelet[1564]: E0508 06:48:20.292272 1564 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2fe43c3e-e572-446e-a9a1-0917aa155240-cilium-config-path podName:2fe43c3e-e572-446e-a9a1-0917aa155240 nodeName:}" failed. 
No retries permitted until 2025-05-08 06:48:20.792171346 +0000 UTC m=+87.745104241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/2fe43c3e-e572-446e-a9a1-0917aa155240-cilium-config-path") pod "cilium-operator-599987898-9d8lr" (UID: "2fe43c3e-e572-446e-a9a1-0917aa155240") : failed to sync configmap cache: timed out waiting for the condition May 8 06:48:20.394643 kubelet[1564]: E0508 06:48:20.394537 1564 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 8 06:48:20.394943 kubelet[1564]: E0508 06:48:20.394737 1564 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-config-path podName:b108c8eb-04c7-4b59-b196-d0b375e5fb44 nodeName:}" failed. No retries permitted until 2025-05-08 06:48:20.894669407 +0000 UTC m=+87.847602302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-config-path") pod "cilium-lmkrc" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44") : failed to sync configmap cache: timed out waiting for the condition May 8 06:48:20.934899 env[1251]: time="2025-05-08T06:48:20.934760595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9d8lr,Uid:2fe43c3e-e572-446e-a9a1-0917aa155240,Namespace:kube-system,Attempt:0,}" May 8 06:48:20.964801 env[1251]: time="2025-05-08T06:48:20.964715765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmkrc,Uid:b108c8eb-04c7-4b59-b196-d0b375e5fb44,Namespace:kube-system,Attempt:0,}" May 8 06:48:20.995867 env[1251]: time="2025-05-08T06:48:20.995305526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:48:20.995867 env[1251]: time="2025-05-08T06:48:20.995437496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:48:20.995867 env[1251]: time="2025-05-08T06:48:20.995470900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:48:20.998195 env[1251]: time="2025-05-08T06:48:20.996535347Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbd597312a2853f9c01829dee7f367c00478bbb0a75b530f042c7513387dfaeb pid=3096 runtime=io.containerd.runc.v2 May 8 06:48:21.006398 kubelet[1564]: E0508 06:48:21.006327 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:21.023592 env[1251]: time="2025-05-08T06:48:21.023454737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:48:21.023943 env[1251]: time="2025-05-08T06:48:21.023894100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:48:21.024177 env[1251]: time="2025-05-08T06:48:21.024117343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:48:21.024653 env[1251]: time="2025-05-08T06:48:21.024588096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab pid=3113 runtime=io.containerd.runc.v2 May 8 06:48:21.106756 env[1251]: time="2025-05-08T06:48:21.106709496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9d8lr,Uid:2fe43c3e-e572-446e-a9a1-0917aa155240,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbd597312a2853f9c01829dee7f367c00478bbb0a75b530f042c7513387dfaeb\"" May 8 06:48:21.110652 env[1251]: time="2025-05-08T06:48:21.110625327Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 06:48:21.111449 env[1251]: time="2025-05-08T06:48:21.111420324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmkrc,Uid:b108c8eb-04c7-4b59-b196-d0b375e5fb44,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\"" May 8 06:48:21.114574 env[1251]: time="2025-05-08T06:48:21.114543464Z" level=info msg="CreateContainer within sandbox \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 06:48:21.136720 env[1251]: time="2025-05-08T06:48:21.136636806Z" level=info msg="CreateContainer within sandbox \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f\"" May 8 06:48:21.138177 env[1251]: time="2025-05-08T06:48:21.138063760Z" level=info msg="StartContainer for \"85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f\"" May 8 06:48:21.204440 env[1251]: time="2025-05-08T06:48:21.204308210Z" level=info msg="StartContainer for \"85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f\" returns successfully" May 8 06:48:21.259172 env[1251]: time="2025-05-08T06:48:21.258824492Z" level=info msg="shim disconnected" id=85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f May 8 06:48:21.260431 env[1251]: time="2025-05-08T06:48:21.260403896Z" level=warning msg="cleaning up after shim disconnected" id=85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f namespace=k8s.io May 8 06:48:21.260538 env[1251]: time="2025-05-08T06:48:21.260520537Z" level=info msg="cleaning up dead shim" May 8 06:48:21.284698 env[1251]: time="2025-05-08T06:48:21.284657552Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3221 runtime=io.containerd.runc.v2\n" May 8 06:48:21.686538 env[1251]: time="2025-05-08T06:48:21.686410997Z" level=info msg="StopPodSandbox for \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\"" May 8 06:48:21.687145 env[1251]: time="2025-05-08T06:48:21.686703552Z" level=info msg="Container to stop \"85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 06:48:21.757791 env[1251]: time="2025-05-08T06:48:21.757574660Z" level=info msg="shim disconnected" id=3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab May 8 06:48:21.757791 env[1251]: time="2025-05-08T06:48:21.757764199Z" 
level=warning msg="cleaning up after shim disconnected" id=3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab namespace=k8s.io May 8 06:48:21.757791 env[1251]: time="2025-05-08T06:48:21.757804826Z" level=info msg="cleaning up dead shim" May 8 06:48:21.777734 env[1251]: time="2025-05-08T06:48:21.777567862Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3256 runtime=io.containerd.runc.v2\n" May 8 06:48:21.778612 env[1251]: time="2025-05-08T06:48:21.778553601Z" level=info msg="TearDown network for sandbox \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" successfully" May 8 06:48:21.778715 env[1251]: time="2025-05-08T06:48:21.778618363Z" level=info msg="StopPodSandbox for \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" returns successfully" May 8 06:48:21.916254 kubelet[1564]: I0508 06:48:21.916043 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hubble-tls\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.916942 kubelet[1564]: I0508 06:48:21.916823 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-run\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.917403 kubelet[1564]: I0508 06:48:21.916959 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.917663 kubelet[1564]: I0508 06:48:21.917349 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-clustermesh-secrets\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.917949 kubelet[1564]: I0508 06:48:21.917913 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64mpz\" (UniqueName: \"kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-kube-api-access-64mpz\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.918341 kubelet[1564]: I0508 06:48:21.918304 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cni-path\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.918679 kubelet[1564]: I0508 06:48:21.918581 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hostproc\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.919033 kubelet[1564]: I0508 06:48:21.918945 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-xtables-lock\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.919448 kubelet[1564]: I0508 06:48:21.919382 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-net\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.919777 kubelet[1564]: I0508 06:48:21.919740 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-lib-modules\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.920127 kubelet[1564]: I0508 06:48:21.920031 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-config-path\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.920453 kubelet[1564]: I0508 06:48:21.920377 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-cgroup\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.920776 kubelet[1564]: I0508 06:48:21.920702 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-bpf-maps\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 
06:48:21.922799 kubelet[1564]: I0508 06:48:21.922761 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-etc-cni-netd\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.923150 kubelet[1564]: I0508 06:48:21.923053 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-kernel\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.923540 kubelet[1564]: I0508 06:48:21.923476 1564 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-ipsec-secrets\") pod \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\" (UID: \"b108c8eb-04c7-4b59-b196-d0b375e5fb44\") " May 8 06:48:21.924024 kubelet[1564]: I0508 06:48:21.923968 1564 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-run\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:21.932349 kubelet[1564]: I0508 06:48:21.921027 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.932648 kubelet[1564]: I0508 06:48:21.922664 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cni-path" (OuterVolumeSpecName: "cni-path") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933167 kubelet[1564]: I0508 06:48:21.933048 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933167 kubelet[1564]: I0508 06:48:21.922706 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hostproc" (OuterVolumeSpecName: "hostproc") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933167 kubelet[1564]: I0508 06:48:21.929682 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933167 kubelet[1564]: I0508 06:48:21.929768 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933167 kubelet[1564]: I0508 06:48:21.929800 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933699 kubelet[1564]: I0508 06:48:21.931979 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 06:48:21.933699 kubelet[1564]: I0508 06:48:21.932262 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933699 kubelet[1564]: I0508 06:48:21.933009 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 06:48:21.933699 kubelet[1564]: I0508 06:48:21.933500 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 06:48:21.934450 kubelet[1564]: I0508 06:48:21.934386 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 06:48:21.936312 kubelet[1564]: I0508 06:48:21.936259 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-kube-api-access-64mpz" (OuterVolumeSpecName: "kube-api-access-64mpz") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "kube-api-access-64mpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 06:48:21.941141 kubelet[1564]: I0508 06:48:21.940952 1564 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b108c8eb-04c7-4b59-b196-d0b375e5fb44" (UID: "b108c8eb-04c7-4b59-b196-d0b375e5fb44"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 06:48:21.954830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab-rootfs.mount: Deactivated successfully. May 8 06:48:21.955477 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab-shm.mount: Deactivated successfully. May 8 06:48:21.955781 systemd[1]: var-lib-kubelet-pods-b108c8eb\x2d04c7\x2d4b59\x2db196\x2dd0b375e5fb44-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d64mpz.mount: Deactivated successfully. May 8 06:48:21.956089 systemd[1]: var-lib-kubelet-pods-b108c8eb\x2d04c7\x2d4b59\x2db196\x2dd0b375e5fb44-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 8 06:48:21.956454 systemd[1]: var-lib-kubelet-pods-b108c8eb\x2d04c7\x2d4b59\x2db196\x2dd0b375e5fb44-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 06:48:21.956715 systemd[1]: var-lib-kubelet-pods-b108c8eb\x2d04c7\x2d4b59\x2db196\x2dd0b375e5fb44-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 06:48:22.007233 kubelet[1564]: E0508 06:48:22.007144 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:22.025087 kubelet[1564]: I0508 06:48:22.025028 1564 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hubble-tls\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.025386 kubelet[1564]: I0508 06:48:22.025351 1564 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-clustermesh-secrets\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.025588 kubelet[1564]: I0508 06:48:22.025559 1564 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-64mpz\" (UniqueName: \"kubernetes.io/projected/b108c8eb-04c7-4b59-b196-d0b375e5fb44-kube-api-access-64mpz\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.025904 kubelet[1564]: I0508 06:48:22.025872 1564 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cni-path\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026086 kubelet[1564]: I0508 06:48:22.026059 1564 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-hostproc\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026349 kubelet[1564]: I0508 06:48:22.026316 1564 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-xtables-lock\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026569 kubelet[1564]: I0508 06:48:22.026515 1564 reconciler_common.go:289] 
"Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-net\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026973 kubelet[1564]: I0508 06:48:22.026733 1564 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-lib-modules\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026973 kubelet[1564]: I0508 06:48:22.026795 1564 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-config-path\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026973 kubelet[1564]: I0508 06:48:22.026846 1564 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-cgroup\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026973 kubelet[1564]: I0508 06:48:22.026870 1564 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-bpf-maps\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026973 kubelet[1564]: I0508 06:48:22.026892 1564 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-etc-cni-netd\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026973 kubelet[1564]: I0508 06:48:22.026914 1564 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b108c8eb-04c7-4b59-b196-d0b375e5fb44-host-proc-sys-kernel\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.026973 kubelet[1564]: I0508 06:48:22.026935 1564 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b108c8eb-04c7-4b59-b196-d0b375e5fb44-cilium-ipsec-secrets\") on node \"172.24.4.62\" DevicePath \"\"" May 8 06:48:22.691809 kubelet[1564]: I0508 06:48:22.691699 1564 scope.go:117] "RemoveContainer" containerID="85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f" May 8 06:48:22.696083 env[1251]: time="2025-05-08T06:48:22.695994495Z" level=info msg="RemoveContainer for \"85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f\"" May 8 06:48:22.702913 env[1251]: time="2025-05-08T06:48:22.702854355Z" level=info msg="RemoveContainer for \"85071df3f5f1b30201a3ee104cbd9993a7af3e4b781a05f35556f7332a284f2f\" returns successfully" May 8 06:48:22.726645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682653956.mount: Deactivated successfully. 
May 8 06:48:22.859535 kubelet[1564]: I0508 06:48:22.859429 1564 topology_manager.go:215] "Topology Admit Handler" podUID="945b0cac-bdb2-4f79-bd98-d4f6986548d9" podNamespace="kube-system" podName="cilium-jk2xh" May 8 06:48:22.860266 kubelet[1564]: E0508 06:48:22.860196 1564 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b108c8eb-04c7-4b59-b196-d0b375e5fb44" containerName="mount-cgroup" May 8 06:48:22.860628 kubelet[1564]: I0508 06:48:22.860594 1564 memory_manager.go:354] "RemoveStaleState removing state" podUID="b108c8eb-04c7-4b59-b196-d0b375e5fb44" containerName="mount-cgroup" May 8 06:48:23.010052 kubelet[1564]: E0508 06:48:23.009713 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:23.035506 kubelet[1564]: I0508 06:48:23.034582 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-cilium-cgroup\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.035506 kubelet[1564]: I0508 06:48:23.034720 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-xtables-lock\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.035506 kubelet[1564]: I0508 06:48:23.034847 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/945b0cac-bdb2-4f79-bd98-d4f6986548d9-cilium-ipsec-secrets\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.035506 kubelet[1564]: I0508 06:48:23.034919 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-cilium-run\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.035506 kubelet[1564]: I0508 06:48:23.035014 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-host-proc-sys-kernel\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.035506 kubelet[1564]: I0508 06:48:23.035061 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-host-proc-sys-net\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036364 kubelet[1564]: I0508 06:48:23.035187 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-lib-modules\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036364 kubelet[1564]: I0508 06:48:23.035229 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-hostproc\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036364 kubelet[1564]: I0508 06:48:23.035272 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-cni-path\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036364 kubelet[1564]: I0508 06:48:23.035313 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-etc-cni-netd\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036364 kubelet[1564]: I0508 06:48:23.035371 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/945b0cac-bdb2-4f79-bd98-d4f6986548d9-clustermesh-secrets\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036364 kubelet[1564]: I0508 06:48:23.035478 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/945b0cac-bdb2-4f79-bd98-d4f6986548d9-hubble-tls\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036910 kubelet[1564]: I0508 06:48:23.035536 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/945b0cac-bdb2-4f79-bd98-d4f6986548d9-bpf-maps\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036910 kubelet[1564]: I0508 06:48:23.035571 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/945b0cac-bdb2-4f79-bd98-d4f6986548d9-cilium-config-path\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.036910 kubelet[1564]: I0508 06:48:23.035642 1564 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncfz\" (UniqueName: \"kubernetes.io/projected/945b0cac-bdb2-4f79-bd98-d4f6986548d9-kube-api-access-wncfz\") pod \"cilium-jk2xh\" (UID: \"945b0cac-bdb2-4f79-bd98-d4f6986548d9\") " pod="kube-system/cilium-jk2xh" May 8 06:48:23.483238 env[1251]: time="2025-05-08T06:48:23.483078272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jk2xh,Uid:945b0cac-bdb2-4f79-bd98-d4f6986548d9,Namespace:kube-system,Attempt:0,}" May 8 06:48:23.536644 env[1251]: time="2025-05-08T06:48:23.536514344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 06:48:23.537051 env[1251]: time="2025-05-08T06:48:23.536991900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 06:48:23.537428 env[1251]: time="2025-05-08T06:48:23.537371259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 06:48:23.538019 env[1251]: time="2025-05-08T06:48:23.537949755Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018 pid=3289 runtime=io.containerd.runc.v2 May 8 06:48:23.628466 env[1251]: time="2025-05-08T06:48:23.628405103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jk2xh,Uid:945b0cac-bdb2-4f79-bd98-d4f6986548d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\"" May 8 06:48:23.632697 env[1251]: time="2025-05-08T06:48:23.632661999Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 06:48:23.668176 env[1251]: time="2025-05-08T06:48:23.668119185Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10632693402f3947fc8ff0a64d9f605278887a01dad5df4f37d3dc1f02a7bb14\"" May 8 06:48:23.669049 env[1251]: time="2025-05-08T06:48:23.669021233Z" level=info msg="StartContainer for \"10632693402f3947fc8ff0a64d9f605278887a01dad5df4f37d3dc1f02a7bb14\"" May 8 06:48:23.737435 env[1251]: time="2025-05-08T06:48:23.737303728Z" level=info msg="StartContainer for \"10632693402f3947fc8ff0a64d9f605278887a01dad5df4f37d3dc1f02a7bb14\" returns successfully" May 8 06:48:24.009568 env[1251]: time="2025-05-08T06:48:24.009297780Z" level=info msg="shim disconnected" id=10632693402f3947fc8ff0a64d9f605278887a01dad5df4f37d3dc1f02a7bb14 May 8 06:48:24.010020 env[1251]: time="2025-05-08T06:48:24.009399613Z" level=warning msg="cleaning up after shim disconnected" id=10632693402f3947fc8ff0a64d9f605278887a01dad5df4f37d3dc1f02a7bb14 namespace=k8s.io May 8 06:48:24.010020 env[1251]: time="2025-05-08T06:48:24.009795793Z" level=info msg="cleaning up dead shim" May 8 06:48:24.010705 kubelet[1564]: E0508 06:48:24.010628 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 06:48:24.052181 env[1251]: time="2025-05-08T06:48:24.052026135Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3373 runtime=io.containerd.runc.v2\n" May 8 06:48:24.099678 kubelet[1564]: E0508 06:48:24.099366 1564 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 06:48:24.236468 env[1251]: time="2025-05-08T06:48:24.236328832Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 06:48:24.242637 env[1251]: time="2025-05-08T06:48:24.241600340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
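The operator image in the PullImage and ImageCreate entries is pinned by both tag and digest, quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb..., and containerd records it resolving to the local image ID sha256:ed355de.... A reference of that form decomposes as repository[:tag]@sha256:<64 hex digits>, with the digest taking precedence for content addressing. The Go sketch below pulls the three parts apart with plain string handling; it is meant to illustrate the notation, not to reproduce containerd's reference parser.

package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"

	name := ref
	digest := ""
	if i := strings.Index(name, "@"); i >= 0 {
		name, digest = name[:i], name[i+1:]
	}

	tag := ""
	// The tag is the part after the last ":" that follows the last "/", so a
	// registry port such as "registry:5000/repo" is not mistaken for a tag.
	if i := strings.LastIndex(name, ":"); i > strings.LastIndex(name, "/") {
		name, tag = name[:i], name[i+1:]
	}

	fmt.Println("repository:", name)   // quay.io/cilium/operator-generic
	fmt.Println("tag:       ", tag)    // v1.12.5 (informational once a digest is present)
	fmt.Println("digest:    ", digest) // sha256:b296eb7f0f...
}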
May 8 06:48:24.252626 env[1251]: time="2025-05-08T06:48:24.252548893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 8 06:48:24.255470 env[1251]: time="2025-05-08T06:48:24.255378113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 8 06:48:24.257202 kubelet[1564]: I0508 06:48:24.257030 1564 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b108c8eb-04c7-4b59-b196-d0b375e5fb44" path="/var/lib/kubelet/pods/b108c8eb-04c7-4b59-b196-d0b375e5fb44/volumes"
May 8 06:48:24.265139 env[1251]: time="2025-05-08T06:48:24.264889153Z" level=info msg="CreateContainer within sandbox \"fbd597312a2853f9c01829dee7f367c00478bbb0a75b530f042c7513387dfaeb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 8 06:48:24.294480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120349729.mount: Deactivated successfully.
May 8 06:48:24.304444 env[1251]: time="2025-05-08T06:48:24.304326091Z" level=info msg="CreateContainer within sandbox \"fbd597312a2853f9c01829dee7f367c00478bbb0a75b530f042c7513387dfaeb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8e890e72a5c5a71c8711a70ac3a740a74e7324fd578ddb03e5e7f8d31ec531ff\""
May 8 06:48:24.307947 env[1251]: time="2025-05-08T06:48:24.307634580Z" level=info msg="StartContainer for \"8e890e72a5c5a71c8711a70ac3a740a74e7324fd578ddb03e5e7f8d31ec531ff\""
May 8 06:48:24.423322 env[1251]: time="2025-05-08T06:48:24.423273744Z" level=info msg="StartContainer for \"8e890e72a5c5a71c8711a70ac3a740a74e7324fd578ddb03e5e7f8d31ec531ff\" returns successfully"
May 8 06:48:24.719294 env[1251]: time="2025-05-08T06:48:24.719205107Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 06:48:24.741930 kubelet[1564]: I0508 06:48:24.741683 1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9d8lr" podStartSLOduration=2.59168959 podStartE2EDuration="5.741556135s" podCreationTimestamp="2025-05-08 06:48:19 +0000 UTC" firstStartedPulling="2025-05-08 06:48:21.110145438 +0000 UTC m=+88.063078293" lastFinishedPulling="2025-05-08 06:48:24.260011943 +0000 UTC m=+91.212944838" observedRunningTime="2025-05-08 06:48:24.740366651 +0000 UTC m=+91.693299546" watchObservedRunningTime="2025-05-08 06:48:24.741556135 +0000 UTC m=+91.694489040"
May 8 06:48:24.749189 env[1251]: time="2025-05-08T06:48:24.749036998Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"474c15c5ac747b64cc76831daed493e9230811bd4c44943ceca201083746cf88\""
May 8 06:48:24.752256 env[1251]: time="2025-05-08T06:48:24.752191273Z" level=info msg="StartContainer for \"474c15c5ac747b64cc76831daed493e9230811bd4c44943ceca201083746cf88\""
May 8 06:48:24.832173 env[1251]: time="2025-05-08T06:48:24.832087818Z" level=info msg="StartContainer for \"474c15c5ac747b64cc76831daed493e9230811bd4c44943ceca201083746cf88\" returns successfully"
May 8 06:48:24.914696 env[1251]: time="2025-05-08T06:48:24.914597915Z" level=info msg="shim disconnected" id=474c15c5ac747b64cc76831daed493e9230811bd4c44943ceca201083746cf88
May 8 06:48:24.915378 env[1251]: time="2025-05-08T06:48:24.915325403Z" level=warning msg="cleaning up after shim disconnected" id=474c15c5ac747b64cc76831daed493e9230811bd4c44943ceca201083746cf88 namespace=k8s.io
May 8 06:48:24.915602 env[1251]: time="2025-05-08T06:48:24.915562784Z" level=info msg="cleaning up dead shim"
May 8 06:48:24.935200 env[1251]: time="2025-05-08T06:48:24.935071836Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3477 runtime=io.containerd.runc.v2\n"
May 8 06:48:25.011139 kubelet[1564]: E0508 06:48:25.010880 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:25.727916 env[1251]: time="2025-05-08T06:48:25.727800008Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 06:48:25.778893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956311672.mount: Deactivated successfully.
May 8 06:48:25.788250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304987883.mount: Deactivated successfully.
May 8 06:48:25.797849 env[1251]: time="2025-05-08T06:48:25.797639245Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f377b454f73fb9342a37ca319102dbe2feba70701cf7dab0070f366a7f5150b8\""
May 8 06:48:25.800695 env[1251]: time="2025-05-08T06:48:25.800613921Z" level=info msg="StartContainer for \"f377b454f73fb9342a37ca319102dbe2feba70701cf7dab0070f366a7f5150b8\""
May 8 06:48:25.900080 env[1251]: time="2025-05-08T06:48:25.900013387Z" level=info msg="StartContainer for \"f377b454f73fb9342a37ca319102dbe2feba70701cf7dab0070f366a7f5150b8\" returns successfully"
May 8 06:48:25.923831 env[1251]: time="2025-05-08T06:48:25.923759463Z" level=info msg="shim disconnected" id=f377b454f73fb9342a37ca319102dbe2feba70701cf7dab0070f366a7f5150b8
May 8 06:48:25.923831 env[1251]: time="2025-05-08T06:48:25.923823925Z" level=warning msg="cleaning up after shim disconnected" id=f377b454f73fb9342a37ca319102dbe2feba70701cf7dab0070f366a7f5150b8 namespace=k8s.io
May 8 06:48:25.923831 env[1251]: time="2025-05-08T06:48:25.923837040Z" level=info msg="cleaning up dead shim"
May 8 06:48:25.933553 env[1251]: time="2025-05-08T06:48:25.933481682Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3534 runtime=io.containerd.runc.v2\n"
May 8 06:48:26.011912 kubelet[1564]: E0508 06:48:26.011679 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:26.156443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f377b454f73fb9342a37ca319102dbe2feba70701cf7dab0070f366a7f5150b8-rootfs.mount: Deactivated successfully.
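The pod_startup_latency_tracker entry at 06:48:24.741930 above reports podStartE2EDuration="5.741556135s" and podStartSLOduration=2.59168959 for cilium-operator-599987898-9d8lr. Those figures are consistent with the E2E duration being watchObservedRunningTime minus podCreationTimestamp, and the SLO duration being E2E minus the image-pull window, with the pull window taken from the monotonic m=+ offsets (91.212944838 - 88.063078293 ≈ 3.15 s). A back-of-the-envelope check of that reading (an interpretation of the logged fields, not the kubelet's actual code path):

#!/usr/bin/env python3
"""Rough check of the 'Observed pod startup duration' figures logged above
for cilium-operator-599987898-9d8lr (a sketch; it only reproduces arithmetic
from the timestamps in that journal entry)."""
from datetime import datetime, timezone

UTC = timezone.utc
created  = datetime(2025, 5, 8, 6, 48, 19, tzinfo=UTC)                 # podCreationTimestamp
observed = datetime(2025, 5, 8, 6, 48, 24, 741556, tzinfo=UTC)         # watchObservedRunningTime, truncated to µs

# Image-pull window taken from the monotonic m=+ offsets in the same entry.
first_pull_m, last_pull_m = 88.063078293, 91.212944838

e2e = (observed - created).total_seconds()     # ≈ 5.741556 s (log: 5.741556135s)
slo = e2e - (last_pull_m - first_pull_m)       # ≈ 2.591689 s (log: 2.59168959)
print(f"E2E ≈ {e2e:.6f} s, SLO (excluding image pull) ≈ {slo:.6f} s")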
May 8 06:48:26.735315 env[1251]: time="2025-05-08T06:48:26.735086270Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 06:48:26.770996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538251792.mount: Deactivated successfully.
May 8 06:48:26.779560 env[1251]: time="2025-05-08T06:48:26.779431832Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"03bcbb51bcc524abd35e1fe8d5791857de780e3977c491404c34d9d118f48c86\""
May 8 06:48:26.780702 env[1251]: time="2025-05-08T06:48:26.780635132Z" level=info msg="StartContainer for \"03bcbb51bcc524abd35e1fe8d5791857de780e3977c491404c34d9d118f48c86\""
May 8 06:48:26.888679 env[1251]: time="2025-05-08T06:48:26.888596168Z" level=info msg="StartContainer for \"03bcbb51bcc524abd35e1fe8d5791857de780e3977c491404c34d9d118f48c86\" returns successfully"
May 8 06:48:26.910940 env[1251]: time="2025-05-08T06:48:26.910865231Z" level=info msg="shim disconnected" id=03bcbb51bcc524abd35e1fe8d5791857de780e3977c491404c34d9d118f48c86
May 8 06:48:26.910940 env[1251]: time="2025-05-08T06:48:26.910930635Z" level=warning msg="cleaning up after shim disconnected" id=03bcbb51bcc524abd35e1fe8d5791857de780e3977c491404c34d9d118f48c86 namespace=k8s.io
May 8 06:48:26.910940 env[1251]: time="2025-05-08T06:48:26.910943079Z" level=info msg="cleaning up dead shim"
May 8 06:48:26.920696 env[1251]: time="2025-05-08T06:48:26.920646590Z" level=warning msg="cleanup warnings time=\"2025-05-08T06:48:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3588 runtime=io.containerd.runc.v2\n"
May 8 06:48:27.013171 kubelet[1564]: E0508 06:48:27.012868 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:27.156909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03bcbb51bcc524abd35e1fe8d5791857de780e3977c491404c34d9d118f48c86-rootfs.mount: Deactivated successfully.
May 8 06:48:27.746666 env[1251]: time="2025-05-08T06:48:27.746575971Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 06:48:27.785544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045990340.mount: Deactivated successfully.
May 8 06:48:27.798947 env[1251]: time="2025-05-08T06:48:27.798847307Z" level=info msg="CreateContainer within sandbox \"3c5aff608c2ab590340d880c3173a939babc15c444fac24830e6382982fa7018\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed\""
May 8 06:48:27.800704 env[1251]: time="2025-05-08T06:48:27.800610718Z" level=info msg="StartContainer for \"a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed\""
May 8 06:48:27.899323 env[1251]: time="2025-05-08T06:48:27.899272280Z" level=info msg="StartContainer for \"a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed\" returns successfully"
May 8 06:48:28.013360 kubelet[1564]: E0508 06:48:28.013213 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:28.303142 kernel: cryptd: max_cpu_qlen set to 1000
May 8 06:48:28.380171 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 8 06:48:28.801653 kubelet[1564]: I0508 06:48:28.800991 1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jk2xh" podStartSLOduration=6.800956009 podStartE2EDuration="6.800956009s" podCreationTimestamp="2025-05-08 06:48:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 06:48:28.799651198 +0000 UTC m=+95.752584093" watchObservedRunningTime="2025-05-08 06:48:28.800956009 +0000 UTC m=+95.753888904"
May 8 06:48:29.013779 kubelet[1564]: E0508 06:48:29.013667 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:29.619262 systemd[1]: run-containerd-runc-k8s.io-a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed-runc.Z7SUUT.mount: Deactivated successfully.
May 8 06:48:30.014301 kubelet[1564]: E0508 06:48:30.014059 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:31.014793 kubelet[1564]: E0508 06:48:31.014705 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:31.606576 systemd-networkd[1026]: lxc_health: Link UP
May 8 06:48:31.617352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 8 06:48:31.617040 systemd-networkd[1026]: lxc_health: Gained carrier
May 8 06:48:31.850163 systemd[1]: run-containerd-runc-k8s.io-a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed-runc.duK2hZ.mount: Deactivated successfully.
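The once-per-second kubelet error file_linux.go:61 "Unable to read config path" path="/etc/kubernetes/manifests", which continues through the rest of this log, comes from the kubelet's file-based (static pod) config source: its staticPodPath points at a directory that does not exist, so on every sync it logs the message and ignores the path. On a node that runs no static pods this is harmless noise; creating the directory, or pointing staticPodPath elsewhere in the kubelet configuration, silences it. A minimal sketch of the first option, assuming this node is in fact meant to use /etc/kubernetes/manifests:

#!/usr/bin/env python3
"""Quiet the recurring 'Unable to read config path' message by ensuring the
kubelet's static-pod directory exists (a sketch; whether this node should
use /etc/kubernetes/manifests at all is an assumption, not stated in the log)."""
from pathlib import Path

manifests = Path("/etc/kubernetes/manifests")
if not manifests.is_dir():
    # Mirrors the existence check the kubelet's file source performs before watching the path.
    manifests.mkdir(parents=True, exist_ok=True)
    print(f"created {manifests}; the kubelet will pick it up on its next sync")
else:
    print(f"{manifests} already exists")

Whether to create the directory or reconfigure staticPodPath is a policy choice for this node; the log alone does not say which is intended.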
May 8 06:48:32.016497 kubelet[1564]: E0508 06:48:32.016235 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:33.016997 kubelet[1564]: E0508 06:48:33.016822 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:33.322900 systemd-networkd[1026]: lxc_health: Gained IPv6LL
May 8 06:48:33.916239 kubelet[1564]: E0508 06:48:33.916198 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:34.017999 kubelet[1564]: E0508 06:48:34.017889 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:34.112867 systemd[1]: run-containerd-runc-k8s.io-a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed-runc.ZYjJeB.mount: Deactivated successfully.
May 8 06:48:35.018897 kubelet[1564]: E0508 06:48:35.018816 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:36.019936 kubelet[1564]: E0508 06:48:36.019739 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:36.334495 systemd[1]: run-containerd-runc-k8s.io-a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed-runc.aYeiS2.mount: Deactivated successfully.
May 8 06:48:37.021119 kubelet[1564]: E0508 06:48:37.021029 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:38.022179 kubelet[1564]: E0508 06:48:38.022082 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:38.629146 systemd[1]: run-containerd-runc-k8s.io-a24025c58e0e963dda900d168342b9d51620d69eb339a4bc57348acbd976d1ed-runc.NpgCpH.mount: Deactivated successfully.
May 8 06:48:39.024858 kubelet[1564]: E0508 06:48:39.024191 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:40.025182 kubelet[1564]: E0508 06:48:40.025076 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:41.026364 kubelet[1564]: E0508 06:48:41.026296 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:42.026882 kubelet[1564]: E0508 06:48:42.026769 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:43.027486 kubelet[1564]: E0508 06:48:43.027381 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:44.027893 kubelet[1564]: E0508 06:48:44.027831 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:45.029592 kubelet[1564]: E0508 06:48:45.029484 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:46.030583 kubelet[1564]: E0508 06:48:46.030514 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:47.030741 kubelet[1564]: E0508 06:48:47.030651 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:48.031574 kubelet[1564]: E0508 06:48:48.031513 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:49.032716 kubelet[1564]: E0508 06:48:49.032639 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:50.033925 kubelet[1564]: E0508 06:48:50.033849 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:51.034438 kubelet[1564]: E0508 06:48:51.034346 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:52.034674 kubelet[1564]: E0508 06:48:52.034613 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:53.035088 kubelet[1564]: E0508 06:48:53.035017 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:53.917291 kubelet[1564]: E0508 06:48:53.917175 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:54.016242 env[1251]: time="2025-05-08T06:48:54.016006393Z" level=info msg="StopPodSandbox for \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\""
May 8 06:48:54.017546 env[1251]: time="2025-05-08T06:48:54.016472845Z" level=info msg="TearDown network for sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" successfully"
May 8 06:48:54.017546 env[1251]: time="2025-05-08T06:48:54.016633669Z" level=info msg="StopPodSandbox for \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" returns successfully"
May 8 06:48:54.020287 env[1251]: time="2025-05-08T06:48:54.020165287Z" level=info msg="RemovePodSandbox for \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\""
May 8 06:48:54.020803 env[1251]: time="2025-05-08T06:48:54.020567678Z" level=info msg="Forcibly stopping sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\""
May 8 06:48:54.023705 env[1251]: time="2025-05-08T06:48:54.023597486Z" level=info msg="TearDown network for sandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" successfully"
May 8 06:48:54.036248 kubelet[1564]: E0508 06:48:54.036181 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:54.105290 env[1251]: time="2025-05-08T06:48:54.104825016Z" level=info msg="RemovePodSandbox \"0abac4677a8f4c21b2d6ec41abc76fd8298d4646a28d859916b60ac75e9052f8\" returns successfully"
May 8 06:48:54.106148 env[1251]: time="2025-05-08T06:48:54.105937099Z" level=info msg="StopPodSandbox for \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\""
May 8 06:48:54.106335 env[1251]: time="2025-05-08T06:48:54.106203673Z" level=info msg="TearDown network for sandbox \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" successfully"
May 8 06:48:54.106335 env[1251]: time="2025-05-08T06:48:54.106288414Z" level=info msg="StopPodSandbox for \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" returns successfully"
May 8 06:48:54.107677 env[1251]: time="2025-05-08T06:48:54.107301489Z" level=info msg="RemovePodSandbox for \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\""
May 8 06:48:54.107677 env[1251]: time="2025-05-08T06:48:54.107371301Z" level=info msg="Forcibly stopping sandbox \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\""
May 8 06:48:54.107677 env[1251]: time="2025-05-08T06:48:54.107533728Z" level=info msg="TearDown network for sandbox \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" successfully"
May 8 06:48:54.119088 env[1251]: time="2025-05-08T06:48:54.118946237Z" level=info msg="RemovePodSandbox \"3a22dec47e81d30e41ecc7c204e0300f377a6c245b1d124c06689b44eec219ab\" returns successfully"
May 8 06:48:55.037204 kubelet[1564]: E0508 06:48:55.037138 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:56.037720 kubelet[1564]: E0508 06:48:56.037601 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:57.038841 kubelet[1564]: E0508 06:48:57.038692 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:58.039165 kubelet[1564]: E0508 06:48:58.039021 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:48:59.039677 kubelet[1564]: E0508 06:48:59.039559 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:00.040465 kubelet[1564]: E0508 06:49:00.040391 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:01.041764 kubelet[1564]: E0508 06:49:01.041695 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:02.043842 kubelet[1564]: E0508 06:49:02.043770 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:03.046056 kubelet[1564]: E0508 06:49:03.045968 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:04.046313 kubelet[1564]: E0508 06:49:04.046241 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:05.047812 kubelet[1564]: E0508 06:49:05.047732 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:06.048529 kubelet[1564]: E0508 06:49:06.048457 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:07.048906 kubelet[1564]: E0508 06:49:07.048822 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:08.049385 kubelet[1564]: E0508 06:49:08.049257 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:09.050541 kubelet[1564]: E0508 06:49:09.050476 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:10.051473 kubelet[1564]: E0508 06:49:10.051390 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:11.053522 kubelet[1564]: E0508 06:49:11.053364 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:12.054146 kubelet[1564]: E0508 06:49:12.053994 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:13.054421 kubelet[1564]: E0508 06:49:13.054273 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:13.916753 kubelet[1564]: E0508 06:49:13.916628 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:14.054874 kubelet[1564]: E0508 06:49:14.054811 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:15.056997 kubelet[1564]: E0508 06:49:15.056891 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:16.058053 kubelet[1564]: E0508 06:49:16.057942 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:17.058677 kubelet[1564]: E0508 06:49:17.058431 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:18.059299 kubelet[1564]: E0508 06:49:18.059156 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:19.060043 kubelet[1564]: E0508 06:49:19.059919 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:20.060602 kubelet[1564]: E0508 06:49:20.060459 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:21.060957 kubelet[1564]: E0508 06:49:21.060787 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:22.061664 kubelet[1564]: E0508 06:49:22.061590 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:23.063238 kubelet[1564]: E0508 06:49:23.063072 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:24.064222 kubelet[1564]: E0508 06:49:24.064071 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:25.064700 kubelet[1564]: E0508 06:49:25.064626 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:26.065492 kubelet[1564]: E0508 06:49:26.065426 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:27.066909 kubelet[1564]: E0508 06:49:27.066761 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:28.067651 kubelet[1564]: E0508 06:49:28.067556 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:29.068251 kubelet[1564]: E0508 06:49:29.068188 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:30.070940 kubelet[1564]: E0508 06:49:30.070722 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:31.072962 kubelet[1564]: E0508 06:49:31.072880 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:32.074747 kubelet[1564]: E0508 06:49:32.074684 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:33.076400 kubelet[1564]: E0508 06:49:33.076314 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:33.916414 kubelet[1564]: E0508 06:49:33.916323 1564 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:34.076655 kubelet[1564]: E0508 06:49:34.076546 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:35.077735 kubelet[1564]: E0508 06:49:35.077662 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:36.078175 kubelet[1564]: E0508 06:49:36.078052 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:37.080266 kubelet[1564]: E0508 06:49:37.080200 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:38.081439 kubelet[1564]: E0508 06:49:38.081334 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:39.081752 kubelet[1564]: E0508 06:49:39.081622 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:40.082561 kubelet[1564]: E0508 06:49:40.082456 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 8 06:49:41.082933 kubelet[1564]: E0508 06:49:41.082799 1564 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
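The 06:48:54 block above shows the kubelet's periodic sandbox garbage collection: for each stale sandbox (0abac467…, 3a22dec4…) containerd logs StopPodSandbox, TearDown network, a "Forcibly stopping" pass, and finally a RemovePodSandbox that returns successfully, which fits the orphaned-volume cleanup for pod b108c8eb-… seen earlier in this section. Below is a small sketch that pulls the removed sandbox IDs and their timestamps out of a journal like this one (the regex is an assumption about the exact containerd wording above, not a published interface):

#!/usr/bin/env python3
"""List pod sandboxes that containerd reports as removed (a sketch keyed to
the RemovePodSandbox wording in the 06:48:54 entries above)."""
import re
import sys

REMOVED_RE = re.compile(
    r'time="([^"]+)" level=info msg="RemovePodSandbox \\?"([0-9a-f]{64})\\?" returns successfully')

for line in sys.stdin:
    m = REMOVED_RE.search(line)
    if m:
        when, sandbox = m.groups()
        print(f"{when}  removed sandbox {sandbox[:12]}")

Piping the containerd unit's journal through it (for example journalctl -u containerd.service, assuming that unit name on this Flatcar node) should print the two removals at 2025-05-08T06:48:54.104825016Z and 2025-05-08T06:48:54.118946237Z.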