May 13 07:32:25.881492 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025 May 13 07:32:25.881546 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 07:32:25.881571 kernel: BIOS-provided physical RAM map: May 13 07:32:25.881597 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 13 07:32:25.881614 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 13 07:32:25.881631 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 13 07:32:25.881651 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 13 07:32:25.881668 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 13 07:32:25.881685 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 13 07:32:25.881701 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 13 07:32:25.881718 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 13 07:32:25.881734 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 13 07:32:25.881754 kernel: NX (Execute Disable) protection: active May 13 07:32:25.881771 kernel: SMBIOS 3.0.0 present. May 13 07:32:25.881792 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 13 07:32:25.881809 kernel: Hypervisor detected: KVM May 13 07:32:25.881827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 13 07:32:25.881845 kernel: kvm-clock: cpu 0, msr 26196001, primary cpu clock May 13 07:32:25.881866 kernel: kvm-clock: using sched offset of 3875570769 cycles May 13 07:32:25.881886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 13 07:32:25.881905 kernel: tsc: Detected 1996.249 MHz processor May 13 07:32:25.881924 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 07:32:25.881944 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 07:32:25.881963 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 13 07:32:25.881981 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 07:32:25.882000 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 13 07:32:25.882018 kernel: ACPI: Early table checksum verification disabled May 13 07:32:25.882040 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 13 07:32:25.882059 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:32:25.882077 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:32:25.882095 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:32:25.882113 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 13 07:32:25.882132 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:32:25.882150 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 07:32:25.882168 kernel: ACPI: Reserving FACP table memory at [mem 
0xbffe1a49-0xbffe1abc] May 13 07:32:25.882191 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 13 07:32:25.882209 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 13 07:32:25.882227 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 13 07:32:25.882245 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 13 07:32:25.882263 kernel: No NUMA configuration found May 13 07:32:25.882289 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 13 07:32:25.882308 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 13 07:32:25.882330 kernel: Zone ranges: May 13 07:32:25.882383 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 07:32:25.882402 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 13 07:32:25.882422 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 13 07:32:25.882440 kernel: Movable zone start for each node May 13 07:32:25.882459 kernel: Early memory node ranges May 13 07:32:25.882478 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 13 07:32:25.882496 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 13 07:32:25.882521 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 13 07:32:25.882540 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 13 07:32:25.882558 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 07:32:25.882577 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 13 07:32:25.882596 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 13 07:32:25.882615 kernel: ACPI: PM-Timer IO Port: 0x608 May 13 07:32:25.882634 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 13 07:32:25.882653 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 13 07:32:25.882672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 13 07:32:25.882695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 13 07:32:25.882714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 07:32:25.882733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 13 07:32:25.882752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 13 07:32:25.882771 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 07:32:25.882790 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 07:32:25.882809 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 13 07:32:25.882828 kernel: Booting paravirtualized kernel on KVM May 13 07:32:25.882847 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 07:32:25.882871 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 13 07:32:25.882890 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 13 07:32:25.882909 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 13 07:32:25.882927 kernel: pcpu-alloc: [0] 0 1 May 13 07:32:25.882945 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0 May 13 07:32:25.882964 kernel: kvm-guest: PV spinlocks disabled, no host support May 13 07:32:25.882983 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1031901 May 13 07:32:25.883002 kernel: Policy zone: Normal May 13 07:32:25.883024 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 07:32:25.883048 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 07:32:25.883067 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 07:32:25.883086 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 07:32:25.883105 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 07:32:25.883125 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 225236K reserved, 0K cma-reserved) May 13 07:32:25.883144 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 07:32:25.883164 kernel: ftrace: allocating 34584 entries in 136 pages May 13 07:32:25.883182 kernel: ftrace: allocated 136 pages with 2 groups May 13 07:32:25.883205 kernel: rcu: Hierarchical RCU implementation. May 13 07:32:25.883225 kernel: rcu: RCU event tracing is enabled. May 13 07:32:25.883245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 07:32:25.883264 kernel: Rude variant of Tasks RCU enabled. May 13 07:32:25.883284 kernel: Tracing variant of Tasks RCU enabled. May 13 07:32:25.883303 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 13 07:32:25.883322 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 07:32:25.887377 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 13 07:32:25.887406 kernel: Console: colour VGA+ 80x25 May 13 07:32:25.887428 kernel: printk: console [tty0] enabled May 13 07:32:25.887443 kernel: printk: console [ttyS0] enabled May 13 07:32:25.887458 kernel: ACPI: Core revision 20210730 May 13 07:32:25.887472 kernel: APIC: Switch to symmetric I/O mode setup May 13 07:32:25.887487 kernel: x2apic enabled May 13 07:32:25.887501 kernel: Switched APIC routing to physical x2apic. May 13 07:32:25.887516 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 13 07:32:25.887530 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 13 07:32:25.887545 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 13 07:32:25.887564 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 13 07:32:25.887579 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 13 07:32:25.887594 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 07:32:25.887608 kernel: Spectre V2 : Mitigation: Retpolines May 13 07:32:25.887623 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 07:32:25.887637 kernel: Speculative Store Bypass: Vulnerable May 13 07:32:25.887652 kernel: x86/fpu: x87 FPU will use FXSAVE May 13 07:32:25.887666 kernel: Freeing SMP alternatives memory: 32K May 13 07:32:25.887680 kernel: pid_max: default: 32768 minimum: 301 May 13 07:32:25.887718 kernel: LSM: Security Framework initializing May 13 07:32:25.887732 kernel: SELinux: Initializing. 
May 13 07:32:25.887747 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 07:32:25.887761 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 07:32:25.887776 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 13 07:32:25.887791 kernel: Performance Events: AMD PMU driver. May 13 07:32:25.887815 kernel: ... version: 0 May 13 07:32:25.887832 kernel: ... bit width: 48 May 13 07:32:25.887847 kernel: ... generic registers: 4 May 13 07:32:25.887862 kernel: ... value mask: 0000ffffffffffff May 13 07:32:25.887877 kernel: ... max period: 00007fffffffffff May 13 07:32:25.887892 kernel: ... fixed-purpose events: 0 May 13 07:32:25.887909 kernel: ... event mask: 000000000000000f May 13 07:32:25.887925 kernel: signal: max sigframe size: 1440 May 13 07:32:25.887939 kernel: rcu: Hierarchical SRCU implementation. May 13 07:32:25.887954 kernel: smp: Bringing up secondary CPUs ... May 13 07:32:25.887969 kernel: x86: Booting SMP configuration: May 13 07:32:25.887987 kernel: .... node #0, CPUs: #1 May 13 07:32:25.888002 kernel: kvm-clock: cpu 1, msr 26196041, secondary cpu clock May 13 07:32:25.888017 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0 May 13 07:32:25.888031 kernel: smp: Brought up 1 node, 2 CPUs May 13 07:32:25.888046 kernel: smpboot: Max logical packages: 2 May 13 07:32:25.888061 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 13 07:32:25.888076 kernel: devtmpfs: initialized May 13 07:32:25.888090 kernel: x86/mm: Memory block size: 128MB May 13 07:32:25.888105 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 07:32:25.888123 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 07:32:25.888138 kernel: pinctrl core: initialized pinctrl subsystem May 13 07:32:25.888153 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 07:32:25.888168 kernel: audit: initializing netlink subsys (disabled) May 13 07:32:25.888183 kernel: audit: type=2000 audit(1747121545.462:1): state=initialized audit_enabled=0 res=1 May 13 07:32:25.888198 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 07:32:25.888212 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 07:32:25.888227 kernel: cpuidle: using governor menu May 13 07:32:25.888242 kernel: ACPI: bus type PCI registered May 13 07:32:25.888259 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 07:32:25.888274 kernel: dca service started, version 1.12.1 May 13 07:32:25.888289 kernel: PCI: Using configuration type 1 for base access May 13 07:32:25.888304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 13 07:32:25.888319 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 13 07:32:25.888334 kernel: ACPI: Added _OSI(Module Device) May 13 07:32:25.888369 kernel: ACPI: Added _OSI(Processor Device) May 13 07:32:25.888383 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 07:32:25.888398 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 07:32:25.888416 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 13 07:32:25.888431 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 13 07:32:25.888446 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 13 07:32:25.888461 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 07:32:25.888475 kernel: ACPI: Interpreter enabled May 13 07:32:25.888490 kernel: ACPI: PM: (supports S0 S3 S5) May 13 07:32:25.888504 kernel: ACPI: Using IOAPIC for interrupt routing May 13 07:32:25.888515 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 07:32:25.888525 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 13 07:32:25.888538 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 07:32:25.888672 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 13 07:32:25.888758 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. May 13 07:32:25.888771 kernel: acpiphp: Slot [3] registered May 13 07:32:25.888779 kernel: acpiphp: Slot [4] registered May 13 07:32:25.888787 kernel: acpiphp: Slot [5] registered May 13 07:32:25.888795 kernel: acpiphp: Slot [6] registered May 13 07:32:25.888803 kernel: acpiphp: Slot [7] registered May 13 07:32:25.888814 kernel: acpiphp: Slot [8] registered May 13 07:32:25.888821 kernel: acpiphp: Slot [9] registered May 13 07:32:25.888829 kernel: acpiphp: Slot [10] registered May 13 07:32:25.888837 kernel: acpiphp: Slot [11] registered May 13 07:32:25.888845 kernel: acpiphp: Slot [12] registered May 13 07:32:25.888853 kernel: acpiphp: Slot [13] registered May 13 07:32:25.888861 kernel: acpiphp: Slot [14] registered May 13 07:32:25.888869 kernel: acpiphp: Slot [15] registered May 13 07:32:25.888877 kernel: acpiphp: Slot [16] registered May 13 07:32:25.888887 kernel: acpiphp: Slot [17] registered May 13 07:32:25.888895 kernel: acpiphp: Slot [18] registered May 13 07:32:25.888902 kernel: acpiphp: Slot [19] registered May 13 07:32:25.888910 kernel: acpiphp: Slot [20] registered May 13 07:32:25.888918 kernel: acpiphp: Slot [21] registered May 13 07:32:25.888926 kernel: acpiphp: Slot [22] registered May 13 07:32:25.888933 kernel: acpiphp: Slot [23] registered May 13 07:32:25.888941 kernel: acpiphp: Slot [24] registered May 13 07:32:25.888949 kernel: acpiphp: Slot [25] registered May 13 07:32:25.888957 kernel: acpiphp: Slot [26] registered May 13 07:32:25.888967 kernel: acpiphp: Slot [27] registered May 13 07:32:25.888975 kernel: acpiphp: Slot [28] registered May 13 07:32:25.888983 kernel: acpiphp: Slot [29] registered May 13 07:32:25.888991 kernel: acpiphp: Slot [30] registered May 13 07:32:25.888999 kernel: acpiphp: Slot [31] registered May 13 07:32:25.889006 kernel: PCI host bridge to bus 0000:00 May 13 07:32:25.889090 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 13 07:32:25.889164 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 13 07:32:25.889241 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 13 07:32:25.889313 kernel: pci_bus 0000:00: root bus 
resource [mem 0xc0000000-0xfebfffff window] May 13 07:32:25.889403 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 13 07:32:25.889476 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 07:32:25.889570 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 13 07:32:25.889662 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 13 07:32:25.889757 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 13 07:32:25.889840 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 13 07:32:25.889923 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 13 07:32:25.890005 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 13 07:32:25.890086 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 13 07:32:25.890168 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 13 07:32:25.890255 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 13 07:32:25.890369 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 13 07:32:25.890458 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 13 07:32:25.890553 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 13 07:32:25.890639 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 13 07:32:25.890725 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 13 07:32:25.890810 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 13 07:32:25.890892 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 13 07:32:25.890979 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 13 07:32:25.891067 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 May 13 07:32:25.891150 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 13 07:32:25.891233 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 13 07:32:25.891316 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 13 07:32:25.891420 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 13 07:32:25.891510 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 13 07:32:25.891598 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 13 07:32:25.891681 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 13 07:32:25.891780 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 13 07:32:25.891868 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 13 07:32:25.891952 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 13 07:32:25.892034 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 13 07:32:25.892126 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 13 07:32:25.892209 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 13 07:32:25.892290 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 13 07:32:25.894412 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 13 07:32:25.894428 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 13 07:32:25.894437 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 13 07:32:25.894445 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 13 07:32:25.894453 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 13 07:32:25.894465 kernel: ACPI: 
PCI: Interrupt link LNKS configured for IRQ 9 May 13 07:32:25.894473 kernel: iommu: Default domain type: Translated May 13 07:32:25.894481 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 07:32:25.894565 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device May 13 07:32:25.894646 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 13 07:32:25.894726 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 13 07:32:25.894738 kernel: vgaarb: loaded May 13 07:32:25.894746 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 07:32:25.894755 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 07:32:25.894766 kernel: PTP clock support registered May 13 07:32:25.894774 kernel: PCI: Using ACPI for IRQ routing May 13 07:32:25.894782 kernel: PCI: pci_cache_line_size set to 64 bytes May 13 07:32:25.894790 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 13 07:32:25.894798 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 13 07:32:25.894806 kernel: clocksource: Switched to clocksource kvm-clock May 13 07:32:25.894814 kernel: VFS: Disk quotas dquot_6.6.0 May 13 07:32:25.894822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 07:32:25.894830 kernel: pnp: PnP ACPI init May 13 07:32:25.894914 kernel: pnp 00:03: [dma 2] May 13 07:32:25.894928 kernel: pnp: PnP ACPI: found 5 devices May 13 07:32:25.894936 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 07:32:25.894944 kernel: NET: Registered PF_INET protocol family May 13 07:32:25.894952 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 07:32:25.894960 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 07:32:25.894968 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 07:32:25.894977 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 07:32:25.894987 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 13 07:32:25.894996 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 07:32:25.895004 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 07:32:25.895012 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 07:32:25.895020 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 07:32:25.895028 kernel: NET: Registered PF_XDP protocol family May 13 07:32:25.895099 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 13 07:32:25.895170 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 13 07:32:25.895240 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 13 07:32:25.895315 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 13 07:32:25.896432 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 13 07:32:25.896520 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 13 07:32:25.896603 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 13 07:32:25.896684 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 13 07:32:25.896696 kernel: PCI: CLS 0 bytes, default 64 May 13 07:32:25.896705 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 07:32:25.896713 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) 
May 13 07:32:25.896724 kernel: Initialise system trusted keyrings May 13 07:32:25.896733 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 07:32:25.896741 kernel: Key type asymmetric registered May 13 07:32:25.896749 kernel: Asymmetric key parser 'x509' registered May 13 07:32:25.896757 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 13 07:32:25.896765 kernel: io scheduler mq-deadline registered May 13 07:32:25.896773 kernel: io scheduler kyber registered May 13 07:32:25.896781 kernel: io scheduler bfq registered May 13 07:32:25.896789 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 07:32:25.896800 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 May 13 07:32:25.896808 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 13 07:32:25.896816 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 13 07:32:25.896824 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 13 07:32:25.896832 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 07:32:25.896840 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 07:32:25.896849 kernel: random: crng init done May 13 07:32:25.896857 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 13 07:32:25.896865 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 13 07:32:25.896875 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 13 07:32:25.896883 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 13 07:32:25.896967 kernel: rtc_cmos 00:04: RTC can wake from S4 May 13 07:32:25.897043 kernel: rtc_cmos 00:04: registered as rtc0 May 13 07:32:25.897116 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T07:32:25 UTC (1747121545) May 13 07:32:25.897188 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 13 07:32:25.897199 kernel: NET: Registered PF_INET6 protocol family May 13 07:32:25.897208 kernel: Segment Routing with IPv6 May 13 07:32:25.897218 kernel: In-situ OAM (IOAM) with IPv6 May 13 07:32:25.897226 kernel: NET: Registered PF_PACKET protocol family May 13 07:32:25.897235 kernel: Key type dns_resolver registered May 13 07:32:25.897242 kernel: IPI shorthand broadcast: enabled May 13 07:32:25.897250 kernel: sched_clock: Marking stable (859266096, 164699923)->(1096009846, -72043827) May 13 07:32:25.897259 kernel: registered taskstats version 1 May 13 07:32:25.897267 kernel: Loading compiled-in X.509 certificates May 13 07:32:25.897275 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095' May 13 07:32:25.897283 kernel: Key type .fscrypt registered May 13 07:32:25.897292 kernel: Key type fscrypt-provisioning registered May 13 07:32:25.897300 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 07:32:25.897308 kernel: ima: Allocated hash algorithm: sha1 May 13 07:32:25.897317 kernel: ima: No architecture policies found May 13 07:32:25.897324 kernel: clk: Disabling unused clocks May 13 07:32:25.897332 kernel: Freeing unused kernel image (initmem) memory: 47456K May 13 07:32:25.897358 kernel: Write protecting the kernel read-only data: 28672k May 13 07:32:25.897367 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 13 07:32:25.897377 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 13 07:32:25.897385 kernel: Run /init as init process May 13 07:32:25.897393 kernel: with arguments: May 13 07:32:25.897401 kernel: /init May 13 07:32:25.897409 kernel: with environment: May 13 07:32:25.897416 kernel: HOME=/ May 13 07:32:25.897424 kernel: TERM=linux May 13 07:32:25.897432 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 07:32:25.897443 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 07:32:25.897455 systemd[1]: Detected virtualization kvm. May 13 07:32:25.897465 systemd[1]: Detected architecture x86-64. May 13 07:32:25.897473 systemd[1]: Running in initrd. May 13 07:32:25.897482 systemd[1]: No hostname configured, using default hostname. May 13 07:32:25.897490 systemd[1]: Hostname set to . May 13 07:32:25.897500 systemd[1]: Initializing machine ID from VM UUID. May 13 07:32:25.897508 systemd[1]: Queued start job for default target initrd.target. May 13 07:32:25.897519 systemd[1]: Started systemd-ask-password-console.path. May 13 07:32:25.897527 systemd[1]: Reached target cryptsetup.target. May 13 07:32:25.897536 systemd[1]: Reached target paths.target. May 13 07:32:25.897545 systemd[1]: Reached target slices.target. May 13 07:32:25.897553 systemd[1]: Reached target swap.target. May 13 07:32:25.897561 systemd[1]: Reached target timers.target. May 13 07:32:25.897570 systemd[1]: Listening on iscsid.socket. May 13 07:32:25.897579 systemd[1]: Listening on iscsiuio.socket. May 13 07:32:25.897591 systemd[1]: Listening on systemd-journald-audit.socket. May 13 07:32:25.897607 systemd[1]: Listening on systemd-journald-dev-log.socket. May 13 07:32:25.897617 systemd[1]: Listening on systemd-journald.socket. May 13 07:32:25.897626 systemd[1]: Listening on systemd-networkd.socket. May 13 07:32:25.897635 systemd[1]: Listening on systemd-udevd-control.socket. May 13 07:32:25.897644 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 07:32:25.897654 systemd[1]: Reached target sockets.target. May 13 07:32:25.897663 systemd[1]: Starting kmod-static-nodes.service... May 13 07:32:25.897672 systemd[1]: Finished network-cleanup.service. May 13 07:32:25.897681 systemd[1]: Starting systemd-fsck-usr.service... May 13 07:32:25.897690 systemd[1]: Starting systemd-journald.service... May 13 07:32:25.897699 systemd[1]: Starting systemd-modules-load.service... May 13 07:32:25.897708 systemd[1]: Starting systemd-resolved.service... May 13 07:32:25.897717 systemd[1]: Starting systemd-vconsole-setup.service... May 13 07:32:25.897726 systemd[1]: Finished kmod-static-nodes.service. May 13 07:32:25.897737 systemd[1]: Finished systemd-fsck-usr.service. 
May 13 07:32:25.897749 systemd-journald[186]: Journal started May 13 07:32:25.897790 systemd-journald[186]: Runtime Journal (/run/log/journal/fbbe899b507248cbba3cdd484fa6e587) is 8.0M, max 78.4M, 70.4M free. May 13 07:32:25.863382 systemd-modules-load[187]: Inserted module 'overlay' May 13 07:32:25.932020 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 07:32:25.932044 systemd[1]: Started systemd-journald.service. May 13 07:32:25.932058 kernel: audit: type=1130 audit(1747121545.924:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.932071 kernel: Bridge firewalling registered May 13 07:32:25.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.908528 systemd-resolved[188]: Positive Trust Anchors: May 13 07:32:25.937535 kernel: audit: type=1130 audit(1747121545.932:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.908538 systemd-resolved[188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 07:32:25.943427 kernel: audit: type=1130 audit(1747121545.937:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.908575 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 07:32:25.950847 kernel: audit: type=1130 audit(1747121545.943:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.911115 systemd-resolved[188]: Defaulting to hostname 'linux'. May 13 07:32:25.931715 systemd-modules-load[187]: Inserted module 'br_netfilter' May 13 07:32:25.932525 systemd[1]: Started systemd-resolved.service. May 13 07:32:25.938230 systemd[1]: Finished systemd-vconsole-setup.service. May 13 07:32:25.944317 systemd[1]: Reached target nss-lookup.target. 
May 13 07:32:25.952146 systemd[1]: Starting dracut-cmdline-ask.service... May 13 07:32:25.953549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 13 07:32:25.961790 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 13 07:32:25.970115 kernel: audit: type=1130 audit(1747121545.962:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.970141 kernel: SCSI subsystem initialized May 13 07:32:25.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.977501 systemd[1]: Finished dracut-cmdline-ask.service. May 13 07:32:25.978806 systemd[1]: Starting dracut-cmdline.service... May 13 07:32:25.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.985457 kernel: audit: type=1130 audit(1747121545.977:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.985483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 07:32:25.993350 kernel: device-mapper: uevent: version 1.0.3 May 13 07:32:25.993378 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 13 07:32:25.993683 dracut-cmdline[203]: dracut-dracut-053 May 13 07:32:25.996463 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166 May 13 07:32:25.998796 systemd-modules-load[187]: Inserted module 'dm_multipath' May 13 07:32:26.009383 kernel: audit: type=1130 audit(1747121545.999:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:25.999561 systemd[1]: Finished systemd-modules-load.service. May 13 07:32:26.000743 systemd[1]: Starting systemd-sysctl.service... May 13 07:32:26.014707 systemd[1]: Finished systemd-sysctl.service. May 13 07:32:26.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.020377 kernel: audit: type=1130 audit(1747121546.014:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.058380 kernel: Loading iSCSI transport class v2.0-870. 
May 13 07:32:26.079369 kernel: iscsi: registered transport (tcp) May 13 07:32:26.105478 kernel: iscsi: registered transport (qla4xxx) May 13 07:32:26.105534 kernel: QLogic iSCSI HBA Driver May 13 07:32:26.156445 systemd[1]: Finished dracut-cmdline.service. May 13 07:32:26.169039 kernel: audit: type=1130 audit(1747121546.156:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.157946 systemd[1]: Starting dracut-pre-udev.service... May 13 07:32:26.239431 kernel: raid6: sse2x4 gen() 9669 MB/s May 13 07:32:26.257407 kernel: raid6: sse2x4 xor() 7209 MB/s May 13 07:32:26.275415 kernel: raid6: sse2x2 gen() 14523 MB/s May 13 07:32:26.293568 kernel: raid6: sse2x2 xor() 8819 MB/s May 13 07:32:26.311413 kernel: raid6: sse2x1 gen() 11412 MB/s May 13 07:32:26.333419 kernel: raid6: sse2x1 xor() 6876 MB/s May 13 07:32:26.333481 kernel: raid6: using algorithm sse2x2 gen() 14523 MB/s May 13 07:32:26.333520 kernel: raid6: .... xor() 8819 MB/s, rmw enabled May 13 07:32:26.334569 kernel: raid6: using ssse3x2 recovery algorithm May 13 07:32:26.352093 kernel: xor: measuring software checksum speed May 13 07:32:26.352151 kernel: prefetch64-sse : 18314 MB/sec May 13 07:32:26.353382 kernel: generic_sse : 15588 MB/sec May 13 07:32:26.353424 kernel: xor: using function: prefetch64-sse (18314 MB/sec) May 13 07:32:26.464416 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 13 07:32:26.475930 systemd[1]: Finished dracut-pre-udev.service. May 13 07:32:26.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.476000 audit: BPF prog-id=7 op=LOAD May 13 07:32:26.476000 audit: BPF prog-id=8 op=LOAD May 13 07:32:26.477634 systemd[1]: Starting systemd-udevd.service... May 13 07:32:26.490158 systemd-udevd[385]: Using default interface naming scheme 'v252'. May 13 07:32:26.494812 systemd[1]: Started systemd-udevd.service. May 13 07:32:26.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.504460 systemd[1]: Starting dracut-pre-trigger.service... May 13 07:32:26.528631 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation May 13 07:32:26.574905 systemd[1]: Finished dracut-pre-trigger.service. May 13 07:32:26.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.576383 systemd[1]: Starting systemd-udev-trigger.service... May 13 07:32:26.622764 systemd[1]: Finished systemd-udev-trigger.service. May 13 07:32:26.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:26.712039 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) May 13 07:32:26.733877 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. May 13 07:32:26.733898 kernel: GPT:17805311 != 20971519 May 13 07:32:26.733910 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 07:32:26.733921 kernel: GPT:17805311 != 20971519 May 13 07:32:26.733931 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 07:32:26.733945 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:32:26.740386 kernel: libata version 3.00 loaded. May 13 07:32:26.745362 kernel: ata_piix 0000:00:01.1: version 2.13 May 13 07:32:26.775663 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440) May 13 07:32:26.775682 kernel: scsi host0: ata_piix May 13 07:32:26.775821 kernel: scsi host1: ata_piix May 13 07:32:26.775936 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 May 13 07:32:26.775951 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 May 13 07:32:26.769464 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 13 07:32:26.818406 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 13 07:32:26.818969 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 13 07:32:26.823747 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 13 07:32:26.828850 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 07:32:26.830177 systemd[1]: Starting disk-uuid.service... May 13 07:32:26.844192 disk-uuid[471]: Primary Header is updated. May 13 07:32:26.844192 disk-uuid[471]: Secondary Entries is updated. May 13 07:32:26.844192 disk-uuid[471]: Secondary Header is updated. May 13 07:32:26.853404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:32:26.862369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:32:26.870367 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:32:27.872424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 07:32:27.872730 disk-uuid[472]: The operation has completed successfully. May 13 07:32:27.953774 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 07:32:27.953993 systemd[1]: Finished disk-uuid.service. May 13 07:32:27.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:27.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:27.974820 systemd[1]: Starting verity-setup.service... May 13 07:32:27.999386 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" May 13 07:32:28.084285 systemd[1]: Found device dev-mapper-usr.device. May 13 07:32:28.086443 systemd[1]: Finished verity-setup.service. May 13 07:32:28.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.089833 systemd[1]: Mounting sysusr-usr.mount... May 13 07:32:28.219381 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 13 07:32:28.220229 systemd[1]: Mounted sysusr-usr.mount. May 13 07:32:28.221638 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 13 07:32:28.223215 systemd[1]: Starting ignition-setup.service... 
May 13 07:32:28.225894 systemd[1]: Starting parse-ip-for-networkd.service... May 13 07:32:28.240927 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 07:32:28.240980 kernel: BTRFS info (device vda6): using free space tree May 13 07:32:28.240992 kernel: BTRFS info (device vda6): has skinny extents May 13 07:32:28.269058 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 07:32:28.284955 systemd[1]: Finished ignition-setup.service. May 13 07:32:28.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.286401 systemd[1]: Starting ignition-fetch-offline.service... May 13 07:32:28.348758 systemd[1]: Finished parse-ip-for-networkd.service. May 13 07:32:28.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.349000 audit: BPF prog-id=9 op=LOAD May 13 07:32:28.350930 systemd[1]: Starting systemd-networkd.service... May 13 07:32:28.377982 systemd-networkd[643]: lo: Link UP May 13 07:32:28.377993 systemd-networkd[643]: lo: Gained carrier May 13 07:32:28.378869 systemd-networkd[643]: Enumeration completed May 13 07:32:28.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.379150 systemd[1]: Started systemd-networkd.service. May 13 07:32:28.379216 systemd-networkd[643]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 07:32:28.380975 systemd[1]: Reached target network.target. May 13 07:32:28.382267 systemd-networkd[643]: eth0: Link UP May 13 07:32:28.382271 systemd-networkd[643]: eth0: Gained carrier May 13 07:32:28.383750 systemd[1]: Starting iscsiuio.service... May 13 07:32:28.390248 systemd[1]: Started iscsiuio.service. May 13 07:32:28.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.392481 systemd[1]: Starting iscsid.service... May 13 07:32:28.395866 iscsid[653]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 13 07:32:28.395866 iscsid[653]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 13 07:32:28.395866 iscsid[653]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 13 07:32:28.395866 iscsid[653]: If using hardware iscsi like qla4xxx this message can be ignored. May 13 07:32:28.395866 iscsid[653]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 13 07:32:28.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:28.403690 iscsid[653]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 13 07:32:28.399465 systemd[1]: Started iscsid.service. May 13 07:32:28.401516 systemd[1]: Starting dracut-initqueue.service... May 13 07:32:28.407076 systemd-networkd[643]: eth0: DHCPv4 address 172.24.4.185/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 07:32:28.415490 systemd[1]: Finished dracut-initqueue.service. May 13 07:32:28.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.416255 systemd[1]: Reached target remote-fs-pre.target. May 13 07:32:28.416740 systemd[1]: Reached target remote-cryptsetup.target. May 13 07:32:28.417207 systemd[1]: Reached target remote-fs.target. May 13 07:32:28.418405 systemd[1]: Starting dracut-pre-mount.service... May 13 07:32:28.429493 systemd[1]: Finished dracut-pre-mount.service. May 13 07:32:28.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.549598 ignition[582]: Ignition 2.14.0 May 13 07:32:28.550585 ignition[582]: Stage: fetch-offline May 13 07:32:28.550791 ignition[582]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:32:28.550842 ignition[582]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:32:28.553111 ignition[582]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:32:28.553339 ignition[582]: parsed url from cmdline: "" May 13 07:32:28.556182 systemd[1]: Finished ignition-fetch-offline.service. May 13 07:32:28.553382 ignition[582]: no config URL provided May 13 07:32:28.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.559760 systemd[1]: Starting ignition-fetch.service... 
May 13 07:32:28.553396 ignition[582]: reading system config file "/usr/lib/ignition/user.ign" May 13 07:32:28.553418 ignition[582]: no config at "/usr/lib/ignition/user.ign" May 13 07:32:28.553432 ignition[582]: failed to fetch config: resource requires networking May 13 07:32:28.554050 ignition[582]: Ignition finished successfully May 13 07:32:28.576683 ignition[667]: Ignition 2.14.0 May 13 07:32:28.576710 ignition[667]: Stage: fetch May 13 07:32:28.576970 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:32:28.577013 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:32:28.579206 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:32:28.579480 ignition[667]: parsed url from cmdline: "" May 13 07:32:28.579490 ignition[667]: no config URL provided May 13 07:32:28.579503 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" May 13 07:32:28.579524 ignition[667]: no config at "/usr/lib/ignition/user.ign" May 13 07:32:28.581939 ignition[667]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 13 07:32:28.588919 ignition[667]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 13 07:32:28.588959 ignition[667]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... May 13 07:32:28.860147 ignition[667]: GET result: OK May 13 07:32:28.860256 ignition[667]: parsing config with SHA512: b9663ed55c34bb5cda6b330ad5e70b54226e2dbe10f54fa8814b3f884398598944b0c65656dcbe29a2f9f52a14e19d8afce1f117e63212d1b93d92635bb6d17f May 13 07:32:28.871583 unknown[667]: fetched base config from "system" May 13 07:32:28.871616 unknown[667]: fetched base config from "system" May 13 07:32:28.872687 ignition[667]: fetch: fetch complete May 13 07:32:28.871644 unknown[667]: fetched user config from "openstack" May 13 07:32:28.872712 ignition[667]: fetch: fetch passed May 13 07:32:28.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.875583 systemd[1]: Finished ignition-fetch.service. May 13 07:32:28.872801 ignition[667]: Ignition finished successfully May 13 07:32:28.889294 systemd[1]: Starting ignition-kargs.service... May 13 07:32:28.924436 ignition[673]: Ignition 2.14.0 May 13 07:32:28.924465 ignition[673]: Stage: kargs May 13 07:32:28.924717 ignition[673]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:32:28.924767 ignition[673]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:32:28.927034 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:32:28.929116 ignition[673]: kargs: kargs passed May 13 07:32:28.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.931108 systemd[1]: Finished ignition-kargs.service. May 13 07:32:28.929212 ignition[673]: Ignition finished successfully May 13 07:32:28.935448 systemd[1]: Starting ignition-disks.service... 
May 13 07:32:28.955473 ignition[679]: Ignition 2.14.0 May 13 07:32:28.956504 ignition[679]: Stage: disks May 13 07:32:28.956814 ignition[679]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:32:28.956858 ignition[679]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:32:28.959191 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:32:28.961410 ignition[679]: disks: disks passed May 13 07:32:28.961502 ignition[679]: Ignition finished successfully May 13 07:32:28.963164 systemd[1]: Finished ignition-disks.service. May 13 07:32:28.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:28.965811 systemd[1]: Reached target initrd-root-device.target. May 13 07:32:28.967432 systemd[1]: Reached target local-fs-pre.target. May 13 07:32:28.969102 systemd[1]: Reached target local-fs.target. May 13 07:32:28.970788 systemd[1]: Reached target sysinit.target. May 13 07:32:28.972465 systemd[1]: Reached target basic.target. May 13 07:32:28.975862 systemd[1]: Starting systemd-fsck-root.service... May 13 07:32:28.997947 systemd-fsck[687]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks May 13 07:32:29.017311 systemd[1]: Finished systemd-fsck-root.service. May 13 07:32:29.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:29.020008 systemd[1]: Mounting sysroot.mount... May 13 07:32:29.038425 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 13 07:32:29.040180 systemd[1]: Mounted sysroot.mount. May 13 07:32:29.042542 systemd[1]: Reached target initrd-root-fs.target. May 13 07:32:29.045816 systemd[1]: Mounting sysroot-usr.mount... May 13 07:32:29.047298 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 13 07:32:29.048503 systemd[1]: Starting flatcar-openstack-hostname.service... May 13 07:32:29.053143 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 07:32:29.053500 systemd[1]: Reached target ignition-diskful.target. May 13 07:32:29.064957 systemd[1]: Mounted sysroot-usr.mount. May 13 07:32:29.068599 systemd[1]: Starting initrd-setup-root.service... May 13 07:32:29.081951 initrd-setup-root[698]: cut: /sysroot/etc/passwd: No such file or directory May 13 07:32:29.105418 initrd-setup-root[706]: cut: /sysroot/etc/group: No such file or directory May 13 07:32:29.116761 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
May 13 07:32:29.122657 initrd-setup-root[715]: cut: /sysroot/etc/shadow: No such file or directory May 13 07:32:29.133095 initrd-setup-root[723]: cut: /sysroot/etc/gshadow: No such file or directory May 13 07:32:29.141513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (713) May 13 07:32:29.158504 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 07:32:29.158559 kernel: BTRFS info (device vda6): using free space tree May 13 07:32:29.158571 kernel: BTRFS info (device vda6): has skinny extents May 13 07:32:29.178780 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 07:32:29.222607 systemd[1]: Finished initrd-setup-root.service. May 13 07:32:29.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:29.224997 systemd[1]: Starting ignition-mount.service... May 13 07:32:29.226932 systemd[1]: Starting sysroot-boot.service... May 13 07:32:29.239818 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 13 07:32:29.239956 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 13 07:32:29.253669 ignition[762]: INFO : Ignition 2.14.0 May 13 07:32:29.253669 ignition[762]: INFO : Stage: mount May 13 07:32:29.254822 ignition[762]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:32:29.254822 ignition[762]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:32:29.256585 ignition[762]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:32:29.256585 ignition[762]: INFO : mount: mount passed May 13 07:32:29.256585 ignition[762]: INFO : Ignition finished successfully May 13 07:32:29.257141 systemd[1]: Finished ignition-mount.service. May 13 07:32:29.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:29.274623 coreos-metadata[693]: May 13 07:32:29.274 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 07:32:29.276542 systemd[1]: Finished sysroot-boot.service. May 13 07:32:29.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:29.293039 coreos-metadata[693]: May 13 07:32:29.292 INFO Fetch successful May 13 07:32:29.293615 coreos-metadata[693]: May 13 07:32:29.293 INFO wrote hostname ci-3510-3-7-n-878bc3845f.novalocal to /sysroot/etc/hostname May 13 07:32:29.296947 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 13 07:32:29.297073 systemd[1]: Finished flatcar-openstack-hostname.service. May 13 07:32:29.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:29.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:29.299334 systemd[1]: Starting ignition-files.service... May 13 07:32:29.306464 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 13 07:32:29.316382 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (770) May 13 07:32:29.320735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 13 07:32:29.320765 kernel: BTRFS info (device vda6): using free space tree May 13 07:32:29.320777 kernel: BTRFS info (device vda6): has skinny extents May 13 07:32:29.330120 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 13 07:32:29.348145 ignition[789]: INFO : Ignition 2.14.0 May 13 07:32:29.348145 ignition[789]: INFO : Stage: files May 13 07:32:29.349408 ignition[789]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:32:29.349408 ignition[789]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:32:29.351124 ignition[789]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:32:29.355083 ignition[789]: DEBUG : files: compiled without relabeling support, skipping May 13 07:32:29.356521 ignition[789]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 07:32:29.356521 ignition[789]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 07:32:29.361366 ignition[789]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 07:32:29.362271 ignition[789]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 07:32:29.363274 unknown[789]: wrote ssh authorized keys file for user: core May 13 07:32:29.364090 ignition[789]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 07:32:29.364090 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 13 07:32:29.364090 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 13 07:32:29.366873 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 07:32:29.366873 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 07:32:29.366873 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 07:32:29.366873 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 07:32:29.366873 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 07:32:29.366873 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 13 07:32:29.441842 systemd-networkd[643]: eth0: Gained IPv6LL May 13 07:32:30.121989 ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 13 07:32:31.782273 
ignition[789]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 13 07:32:31.783784 ignition[789]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" May 13 07:32:31.788453 ignition[789]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" May 13 07:32:31.788453 ignition[789]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 13 07:32:31.788453 ignition[789]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " May 13 07:32:31.796841 ignition[789]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 07:32:31.796841 ignition[789]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 07:32:31.796841 ignition[789]: INFO : files: files passed May 13 07:32:31.796841 ignition[789]: INFO : Ignition finished successfully May 13 07:32:31.799667 systemd[1]: Finished ignition-files.service. May 13 07:32:31.811220 kernel: kauditd_printk_skb: 27 callbacks suppressed May 13 07:32:31.811243 kernel: audit: type=1130 audit(1747121551.803:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.808108 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 13 07:32:31.816281 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 13 07:32:31.817163 systemd[1]: Starting ignition-quench.service... May 13 07:32:31.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.823046 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 07:32:31.840517 kernel: audit: type=1130 audit(1747121551.823:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.840541 kernel: audit: type=1131 audit(1747121551.823:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.840588 initrd-setup-root-after-ignition[814]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 07:32:31.823162 systemd[1]: Finished ignition-quench.service. May 13 07:32:31.848218 kernel: audit: type=1130 audit(1747121551.842:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:31.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.840887 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 13 07:32:31.842960 systemd[1]: Reached target ignition-complete.target. May 13 07:32:31.850994 systemd[1]: Starting initrd-parse-etc.service... May 13 07:32:31.869447 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 07:32:31.870536 systemd[1]: Finished initrd-parse-etc.service. May 13 07:32:31.892520 kernel: audit: type=1130 audit(1747121551.871:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.892570 kernel: audit: type=1131 audit(1747121551.871:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.871539 systemd[1]: Reached target initrd-fs.target. May 13 07:32:31.892486 systemd[1]: Reached target initrd.target. May 13 07:32:31.894246 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 13 07:32:31.896139 systemd[1]: Starting dracut-pre-pivot.service... May 13 07:32:31.925379 systemd[1]: Finished dracut-pre-pivot.service. May 13 07:32:31.939760 kernel: audit: type=1130 audit(1747121551.926:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.928317 systemd[1]: Starting initrd-cleanup.service... May 13 07:32:31.957577 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 07:32:31.957686 systemd[1]: Finished initrd-cleanup.service. May 13 07:32:31.977124 kernel: audit: type=1130 audit(1747121551.959:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.977152 kernel: audit: type=1131 audit(1747121551.959:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:31.960195 systemd[1]: Stopped target nss-lookup.target. May 13 07:32:31.978215 systemd[1]: Stopped target remote-cryptsetup.target. May 13 07:32:31.979217 systemd[1]: Stopped target timers.target. May 13 07:32:31.980191 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 07:32:31.980243 systemd[1]: Stopped dracut-pre-pivot.service. May 13 07:32:31.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.986408 kernel: audit: type=1131 audit(1747121551.981:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:31.986491 systemd[1]: Stopped target initrd.target. May 13 07:32:31.987482 systemd[1]: Stopped target basic.target. May 13 07:32:31.988489 systemd[1]: Stopped target ignition-complete.target. May 13 07:32:31.989498 systemd[1]: Stopped target ignition-diskful.target. May 13 07:32:31.990638 systemd[1]: Stopped target initrd-root-device.target. May 13 07:32:31.992338 systemd[1]: Stopped target remote-fs.target. May 13 07:32:31.993474 systemd[1]: Stopped target remote-fs-pre.target. May 13 07:32:31.995110 systemd[1]: Stopped target sysinit.target. May 13 07:32:31.996733 systemd[1]: Stopped target local-fs.target. May 13 07:32:31.998258 systemd[1]: Stopped target local-fs-pre.target. May 13 07:32:31.999892 systemd[1]: Stopped target swap.target. May 13 07:32:32.001452 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 07:32:32.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.001574 systemd[1]: Stopped dracut-pre-mount.service. May 13 07:32:32.003076 systemd[1]: Stopped target cryptsetup.target. May 13 07:32:32.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.005089 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 07:32:32.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.005184 systemd[1]: Stopped dracut-initqueue.service. May 13 07:32:32.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.007031 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 07:32:32.007120 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 13 07:32:32.008701 systemd[1]: ignition-files.service: Deactivated successfully. May 13 07:32:32.008788 systemd[1]: Stopped ignition-files.service. May 13 07:32:32.011944 systemd[1]: Stopping ignition-mount.service... May 13 07:32:32.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:32.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.016177 systemd[1]: Stopping iscsid.service... May 13 07:32:32.022068 iscsid[653]: iscsid shutting down. May 13 07:32:32.016765 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 07:32:32.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.016836 systemd[1]: Stopped kmod-static-nodes.service. May 13 07:32:32.018696 systemd[1]: Stopping sysroot-boot.service... May 13 07:32:32.019320 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 07:32:32.019407 systemd[1]: Stopped systemd-udev-trigger.service. May 13 07:32:32.020582 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 07:32:32.020640 systemd[1]: Stopped dracut-pre-trigger.service. May 13 07:32:32.035598 ignition[827]: INFO : Ignition 2.14.0 May 13 07:32:32.038360 ignition[827]: INFO : Stage: umount May 13 07:32:32.038360 ignition[827]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 13 07:32:32.038360 ignition[827]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a May 13 07:32:32.038360 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 13 07:32:32.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.040724 systemd[1]: iscsid.service: Deactivated successfully. May 13 07:32:32.040839 systemd[1]: Stopped iscsid.service. May 13 07:32:32.041886 systemd[1]: Stopping iscsiuio.service... May 13 07:32:32.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.047515 systemd[1]: iscsiuio.service: Deactivated successfully. May 13 07:32:32.049019 ignition[827]: INFO : umount: umount passed May 13 07:32:32.049019 ignition[827]: INFO : Ignition finished successfully May 13 07:32:32.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 13 07:32:32.047602 systemd[1]: Stopped iscsiuio.service. May 13 07:32:32.050563 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 07:32:32.050638 systemd[1]: Stopped ignition-mount.service. May 13 07:32:32.051383 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 07:32:32.051432 systemd[1]: Stopped ignition-disks.service. May 13 07:32:32.051918 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 07:32:32.051957 systemd[1]: Stopped ignition-kargs.service. May 13 07:32:32.052560 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 07:32:32.052607 systemd[1]: Stopped ignition-fetch.service. May 13 07:32:32.053111 systemd[1]: Stopped target network.target. May 13 07:32:32.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.053600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 07:32:32.053642 systemd[1]: Stopped ignition-fetch-offline.service. May 13 07:32:32.054128 systemd[1]: Stopped target paths.target. May 13 07:32:32.054603 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 07:32:32.058556 systemd[1]: Stopped systemd-ask-password-console.path. May 13 07:32:32.059088 systemd[1]: Stopped target slices.target. May 13 07:32:32.059530 systemd[1]: Stopped target sockets.target. May 13 07:32:32.064119 systemd[1]: iscsid.socket: Deactivated successfully. May 13 07:32:32.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.064157 systemd[1]: Closed iscsid.socket. May 13 07:32:32.064654 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 07:32:32.064686 systemd[1]: Closed iscsiuio.socket. May 13 07:32:32.065657 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 07:32:32.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.065697 systemd[1]: Stopped ignition-setup.service. May 13 07:32:32.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.066960 systemd[1]: Stopping systemd-networkd.service... May 13 07:32:32.088000 audit: BPF prog-id=6 op=UNLOAD May 13 07:32:32.067823 systemd[1]: Stopping systemd-resolved.service... May 13 07:32:32.070384 systemd-networkd[643]: eth0: DHCPv6 lease lost May 13 07:32:32.088000 audit: BPF prog-id=9 op=UNLOAD May 13 07:32:32.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:32.071822 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 07:32:32.071913 systemd[1]: Stopped systemd-networkd.service. May 13 07:32:32.073442 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 07:32:32.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.073474 systemd[1]: Closed systemd-networkd.socket. May 13 07:32:32.075190 systemd[1]: Stopping network-cleanup.service... May 13 07:32:32.077100 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 07:32:32.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.077170 systemd[1]: Stopped parse-ip-for-networkd.service. May 13 07:32:32.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.077733 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 07:32:32.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.077773 systemd[1]: Stopped systemd-sysctl.service. May 13 07:32:32.078329 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 07:32:32.078403 systemd[1]: Stopped systemd-modules-load.service. May 13 07:32:32.079553 systemd[1]: Stopping systemd-udevd.service... May 13 07:32:32.084542 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 07:32:32.085041 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 07:32:32.085142 systemd[1]: Stopped systemd-resolved.service. May 13 07:32:32.089416 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 07:32:32.089510 systemd[1]: Stopped network-cleanup.service. May 13 07:32:32.092215 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 07:32:32.092336 systemd[1]: Stopped systemd-udevd.service. May 13 07:32:32.094395 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 07:32:32.094432 systemd[1]: Closed systemd-udevd-control.socket. May 13 07:32:32.095221 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 07:32:32.095251 systemd[1]: Closed systemd-udevd-kernel.socket. May 13 07:32:32.096208 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 07:32:32.096247 systemd[1]: Stopped dracut-pre-udev.service. May 13 07:32:32.097229 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 07:32:32.097265 systemd[1]: Stopped dracut-cmdline.service. May 13 07:32:32.098327 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 07:32:32.098383 systemd[1]: Stopped dracut-cmdline-ask.service. May 13 07:32:32.099998 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 13 07:32:32.113997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 07:32:32.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:32.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.114087 systemd[1]: Stopped systemd-vconsole-setup.service. May 13 07:32:32.116645 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 07:32:32.116733 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 13 07:32:32.118450 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 07:32:32.123664 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 07:32:32.123812 systemd[1]: Stopped sysroot-boot.service. May 13 07:32:32.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.124959 systemd[1]: Reached target initrd-switch-root.target. May 13 07:32:32.125853 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 07:32:32.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.125895 systemd[1]: Stopped initrd-setup-root.service. May 13 07:32:32.127518 systemd[1]: Starting initrd-switch-root.service... May 13 07:32:32.145958 systemd[1]: Switching root. May 13 07:32:32.167862 systemd-journald[186]: Journal stopped May 13 07:32:36.715584 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). May 13 07:32:36.715676 kernel: SELinux: Class mctp_socket not defined in policy. May 13 07:32:36.715697 kernel: SELinux: Class anon_inode not defined in policy. May 13 07:32:36.715710 kernel: SELinux: the above unknown classes and permissions will be allowed May 13 07:32:36.715722 kernel: SELinux: policy capability network_peer_controls=1 May 13 07:32:36.715734 kernel: SELinux: policy capability open_perms=1 May 13 07:32:36.715746 kernel: SELinux: policy capability extended_socket_class=1 May 13 07:32:36.715758 kernel: SELinux: policy capability always_check_network=0 May 13 07:32:36.715774 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 07:32:36.715786 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 07:32:36.715797 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 07:32:36.715808 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 07:32:36.715821 systemd[1]: Successfully loaded SELinux policy in 99.532ms. May 13 07:32:36.715838 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.539ms. May 13 07:32:36.715853 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 13 07:32:36.715866 systemd[1]: Detected virtualization kvm. May 13 07:32:36.715880 systemd[1]: Detected architecture x86-64. May 13 07:32:36.715893 systemd[1]: Detected first boot. 
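The systemd banner above lists compile-time features with a "+" or "-" prefix for built-in versus compiled-out. A small Python sketch that splits that string into the two sets (the string is copied from the log; the summary printed at the end is only illustrative):

    # Feature flags exactly as printed by systemd 252 above.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
                "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")

    enabled  = {f[1:] for f in features.split() if f.startswith("+")}
    disabled = {f[1:] for f in features.split() if f.startswith("-")}
    print(len(enabled), "built in,", len(disabled), "compiled out")
    print("SELINUX" in enabled)   # True, consistent with the SELinux policy load timed above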
May 13 07:32:36.715905 systemd[1]: Hostname set to . May 13 07:32:36.715923 systemd[1]: Initializing machine ID from VM UUID. May 13 07:32:36.715935 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 13 07:32:36.715947 systemd[1]: Populated /etc with preset unit settings. May 13 07:32:36.715961 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 07:32:36.715979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 07:32:36.715994 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 07:32:36.716007 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 07:32:36.716019 systemd[1]: Stopped initrd-switch-root.service. May 13 07:32:36.716031 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 07:32:36.716044 systemd[1]: Created slice system-addon\x2dconfig.slice. May 13 07:32:36.716058 systemd[1]: Created slice system-addon\x2drun.slice. May 13 07:32:36.716073 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. May 13 07:32:36.716088 systemd[1]: Created slice system-getty.slice. May 13 07:32:36.716101 systemd[1]: Created slice system-modprobe.slice. May 13 07:32:36.716113 systemd[1]: Created slice system-serial\x2dgetty.slice. May 13 07:32:36.716126 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 13 07:32:36.716140 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 13 07:32:36.716152 systemd[1]: Created slice user.slice. May 13 07:32:36.716170 systemd[1]: Started systemd-ask-password-console.path. May 13 07:32:36.716182 systemd[1]: Started systemd-ask-password-wall.path. May 13 07:32:36.716194 systemd[1]: Set up automount boot.automount. May 13 07:32:36.716207 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 13 07:32:36.716219 systemd[1]: Stopped target initrd-switch-root.target. May 13 07:32:36.716234 systemd[1]: Stopped target initrd-fs.target. May 13 07:32:36.716246 systemd[1]: Stopped target initrd-root-fs.target. May 13 07:32:36.716259 systemd[1]: Reached target integritysetup.target. May 13 07:32:36.716271 systemd[1]: Reached target remote-cryptsetup.target. May 13 07:32:36.716283 systemd[1]: Reached target remote-fs.target. May 13 07:32:36.716295 systemd[1]: Reached target slices.target. May 13 07:32:36.716307 systemd[1]: Reached target swap.target. May 13 07:32:36.716320 systemd[1]: Reached target torcx.target. May 13 07:32:36.716333 systemd[1]: Reached target veritysetup.target. May 13 07:32:36.716383 systemd[1]: Listening on systemd-coredump.socket. May 13 07:32:36.716405 systemd[1]: Listening on systemd-initctl.socket. May 13 07:32:36.716421 systemd[1]: Listening on systemd-networkd.socket. May 13 07:32:36.716435 systemd[1]: Listening on systemd-udevd-control.socket. May 13 07:32:36.716449 systemd[1]: Listening on systemd-udevd-kernel.socket. May 13 07:32:36.716463 systemd[1]: Listening on systemd-userdbd.socket. May 13 07:32:36.716477 systemd[1]: Mounting dev-hugepages.mount... May 13 07:32:36.716490 systemd[1]: Mounting dev-mqueue.mount... May 13 07:32:36.716504 systemd[1]: Mounting media.mount... 
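Slice names such as system-coreos\x2dmetadata\x2dsshkeys.slice above show systemd's unit-name escaping: a literal "-" inside a name component is written as \x2d so that "-" can keep acting as the hierarchy separator. A sketch of undoing just the \xNN escapes seen here (the full rules, including path encoding, are what systemd-escape --unescape implements):

    import re

    def unescape_unit(name: str) -> str:
        # Undo \xNN hex escapes, e.g. "\x2d" -> "-"; other characters pass through.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"system-coreos\x2dmetadata\x2dsshkeys.slice"))
    # system-coreos-metadata-sshkeys.slice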
May 13 07:32:36.716518 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:32:36.716535 systemd[1]: Mounting sys-kernel-debug.mount... May 13 07:32:36.716549 systemd[1]: Mounting sys-kernel-tracing.mount... May 13 07:32:36.716563 systemd[1]: Mounting tmp.mount... May 13 07:32:36.716577 systemd[1]: Starting flatcar-tmpfiles.service... May 13 07:32:36.716591 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:32:36.716605 systemd[1]: Starting kmod-static-nodes.service... May 13 07:32:36.716619 systemd[1]: Starting modprobe@configfs.service... May 13 07:32:36.716633 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:32:36.716647 systemd[1]: Starting modprobe@drm.service... May 13 07:32:36.716663 systemd[1]: Starting modprobe@efi_pstore.service... May 13 07:32:36.716679 systemd[1]: Starting modprobe@fuse.service... May 13 07:32:36.716693 systemd[1]: Starting modprobe@loop.service... May 13 07:32:36.716707 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 07:32:36.716721 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 07:32:36.716736 systemd[1]: Stopped systemd-fsck-root.service. May 13 07:32:36.716749 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 07:32:36.716763 systemd[1]: Stopped systemd-fsck-usr.service. May 13 07:32:36.716779 systemd[1]: Stopped systemd-journald.service. May 13 07:32:36.716793 systemd[1]: Starting systemd-journald.service... May 13 07:32:36.716807 systemd[1]: Starting systemd-modules-load.service... May 13 07:32:36.716822 systemd[1]: Starting systemd-network-generator.service... May 13 07:32:36.716835 systemd[1]: Starting systemd-remount-fs.service... May 13 07:32:36.716849 systemd[1]: Starting systemd-udev-trigger.service... May 13 07:32:36.716863 systemd[1]: verity-setup.service: Deactivated successfully. May 13 07:32:36.716877 systemd[1]: Stopped verity-setup.service. May 13 07:32:36.716891 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:32:36.716907 systemd[1]: Mounted dev-hugepages.mount. May 13 07:32:36.716921 systemd[1]: Mounted dev-mqueue.mount. May 13 07:32:36.716935 systemd[1]: Mounted media.mount. May 13 07:32:36.716948 systemd[1]: Mounted sys-kernel-debug.mount. May 13 07:32:36.716963 systemd[1]: Mounted sys-kernel-tracing.mount. May 13 07:32:36.716979 systemd[1]: Mounted tmp.mount. May 13 07:32:36.716993 systemd[1]: Finished kmod-static-nodes.service. May 13 07:32:36.717010 systemd-journald[924]: Journal started May 13 07:32:36.717059 systemd-journald[924]: Runtime Journal (/run/log/journal/fbbe899b507248cbba3cdd484fa6e587) is 8.0M, max 78.4M, 70.4M free. 
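With systemd-journald running and the runtime journal sized as reported above, the same entries can be read back per unit. A sketch using journalctl's JSON output (the unit chosen for the filter is only an example; __REALTIME_TIMESTAMP and MESSAGE are standard journal field names):

    import json
    import subprocess

    # One JSON object per journal entry for a single unit from the current boot.
    out = subprocess.run(
        ["journalctl", "-b", "-u", "systemd-journald.service", "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        entry = json.loads(line)
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))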
May 13 07:32:32.499000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 07:32:32.643000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 07:32:32.643000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 13 07:32:32.643000 audit: BPF prog-id=10 op=LOAD May 13 07:32:32.643000 audit: BPF prog-id=10 op=UNLOAD May 13 07:32:32.643000 audit: BPF prog-id=11 op=LOAD May 13 07:32:32.643000 audit: BPF prog-id=11 op=UNLOAD May 13 07:32:32.793000 audit[859]: AVC avc: denied { associate } for pid=859 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 13 07:32:32.793000 audit[859]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=842 pid=859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 07:32:32.793000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 07:32:32.795000 audit[859]: AVC avc: denied { associate } for pid=859 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 13 07:32:32.795000 audit[859]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=842 pid=859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 07:32:32.795000 audit: CWD cwd="/" May 13 07:32:32.795000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:32.795000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:32.795000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 13 07:32:36.465000 audit: BPF prog-id=12 op=LOAD May 13 07:32:36.465000 audit: BPF prog-id=3 op=UNLOAD May 13 07:32:36.465000 audit: BPF prog-id=13 op=LOAD May 13 07:32:36.465000 audit: BPF prog-id=14 op=LOAD May 13 07:32:36.465000 audit: BPF prog-id=4 op=UNLOAD May 13 07:32:36.465000 audit: BPF prog-id=5 op=UNLOAD May 13 07:32:36.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.485000 audit: BPF prog-id=12 op=UNLOAD May 13 07:32:36.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.664000 audit: BPF prog-id=15 op=LOAD May 13 07:32:36.664000 audit: BPF prog-id=16 op=LOAD May 13 07:32:36.664000 audit: BPF prog-id=17 op=LOAD May 13 07:32:36.664000 audit: BPF prog-id=13 op=UNLOAD May 13 07:32:36.665000 audit: BPF prog-id=14 op=UNLOAD May 13 07:32:36.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.713000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 13 07:32:36.713000 audit[924]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc8ee7dbb0 a2=4000 a3=7ffc8ee7dc4c items=0 ppid=1 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 07:32:36.713000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 13 07:32:36.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.788566 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 07:32:36.720976 systemd[1]: Started systemd-journald.service. 
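The torcx-generator entries above and below show it walking an ordered list of store directories, skipping the ones that do not exist and caching any *.torcx.tgz archives it finds. A rough Python sketch of that probing (the store paths and the docker:20.10.torcx.tgz naming are taken from the log; the rest is illustrative, not torcx's actual implementation):

    import os

    # Store search order as reported by the "common configuration parsed" line above.
    store_paths = [
        "/usr/share/torcx/store",
        "/usr/share/oem/torcx/store/3510.3.7",
        "/usr/share/oem/torcx/store",
        "/var/lib/torcx/store/3510.3.7",
        "/var/lib/torcx/store",
    ]

    archives = []
    for path in store_paths:
        if not os.path.isdir(path):
            print("store skipped:", path)      # mirrors the log's "store skipped" entries
            continue
        archives += [os.path.join(path, name)
                     for name in sorted(os.listdir(path))
                     if name.endswith(".torcx.tgz")]   # e.g. docker:20.10.torcx.tgz
    print(archives)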
May 13 07:32:36.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.463384 systemd[1]: Queued start job for default target multi-user.target. May 13 07:32:36.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:32.789713 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 07:32:36.463397 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 13 07:32:32.789738 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 07:32:36.466863 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 07:32:32.789789 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 13 07:32:36.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.720821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:32:32.789802 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="skipped missing lower profile" missing profile=oem May 13 07:32:36.721806 systemd[1]: Finished modprobe@dm_mod.service. May 13 07:32:36.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:32.789841 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 13 07:32:36.722585 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 07:32:32.789859 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 13 07:32:36.727526 systemd[1]: Finished modprobe@drm.service. May 13 07:32:32.790103 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 13 07:32:36.728366 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 07:32:32.790148 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 13 07:32:36.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.728507 systemd[1]: Finished modprobe@configfs.service. May 13 07:32:32.790164 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 13 07:32:36.729280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 07:32:32.792283 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 13 07:32:36.729416 systemd[1]: Finished modprobe@efi_pstore.service. May 13 07:32:32.792324 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 13 07:32:36.730129 systemd[1]: Finished systemd-network-generator.service. May 13 07:32:32.792380 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 13 07:32:36.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.730976 systemd[1]: Finished systemd-remount-fs.service. May 13 07:32:32.792402 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 13 07:32:36.732580 systemd[1]: Reached target network-pre.target. 
May 13 07:32:32.792423 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 13 07:32:32.792440 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 13 07:32:35.949968 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:35Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 07:32:35.950242 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:35Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 07:32:36.734579 systemd[1]: Mounting sys-kernel-config.mount... May 13 07:32:35.950390 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:35Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 07:32:35.950582 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:35Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 13 07:32:36.735110 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 07:32:35.950641 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:35Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 13 07:32:35.950708 /usr/lib/systemd/system-generators/torcx-generator[859]: time="2025-05-13T07:32:35Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 13 07:32:36.739618 systemd[1]: Starting systemd-hwdb-update.service... May 13 07:32:36.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.741554 systemd[1]: Starting systemd-journal-flush.service... May 13 07:32:36.742099 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 07:32:36.743144 systemd[1]: Starting systemd-random-seed.service... May 13 07:32:36.745310 systemd[1]: Finished systemd-modules-load.service. May 13 07:32:36.746515 systemd[1]: Mounted sys-kernel-config.mount. May 13 07:32:36.748531 systemd[1]: Starting systemd-sysctl.service... 
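The "system state sealed" entry above records TORCX_* variables being written to /run/metadata/torcx as shell-style KEY="value" pairs. Assuming the file holds one pair per line (an EnvironmentFile-style layout; a simplification made for this sketch), it can be read back like this:

    def read_env_file(path="/run/metadata/torcx"):
        # Parse lines such as: TORCX_PROFILE_PATH="/run/torcx/profile.json"
        env = {}
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                if not line or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                env[key] = value.strip().strip('"')
        return env

    print(read_env_file())   # e.g. {'TORCX_LOWER_PROFILES': 'vendor', ...}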
May 13 07:32:36.757502 systemd-journald[924]: Time spent on flushing to /var/log/journal/fbbe899b507248cbba3cdd484fa6e587 is 30.304ms for 1064 entries. May 13 07:32:36.757502 systemd-journald[924]: System Journal (/var/log/journal/fbbe899b507248cbba3cdd484fa6e587) is 8.0M, max 584.8M, 576.8M free. May 13 07:32:36.837571 systemd-journald[924]: Received client request to flush runtime journal. May 13 07:32:36.837662 kernel: fuse: init (API version 7.34) May 13 07:32:36.837693 kernel: loop: module loaded May 13 07:32:36.837718 kernel: kauditd_printk_skb: 90 callbacks suppressed May 13 07:32:36.837743 kernel: audit: type=1130 audit(1747121556.829:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.762291 systemd[1]: Finished systemd-random-seed.service. May 13 07:32:36.762926 systemd[1]: Reached target first-boot-complete.target. May 13 07:32:36.767034 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 07:32:36.767177 systemd[1]: Finished modprobe@fuse.service. May 13 07:32:36.769048 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 13 07:32:36.772559 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 13 07:32:36.774803 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 07:32:36.774923 systemd[1]: Finished modprobe@loop.service. May 13 07:32:36.775585 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
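The kernel-format audit records interleaved here, such as the type=1130 entries that accompany the SERVICE_START events, each carry a type, an epoch-timestamp:serial pair, and key=value fields. A sketch of pulling those pieces out of one record (the record text is copied from the log; the field handling is deliberately simplified and only extracts a few keys):

    import re

    record = ("audit: type=1130 audit(1747121556.829:129): pid=1 uid=0 auid=4294967295 "
              "ses=4294967295 subj=system_u:system_r:kernel_t:s0 "
              "msg='unit=systemd-udev-trigger comm=\"systemd\" "
              "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

    head = re.search(r"type=(\d+) audit\((\d+\.\d+):(\d+)\)", record)
    atype, ts, serial = int(head.group(1)), float(head.group(2)), int(head.group(3))

    # msg='...' can contain spaces, so pick individual keys rather than splitting the line.
    unit = re.search(r"unit=(\S+)", record).group(1)
    result = re.search(r"res=(\w+)", record).group(1)
    print(atype, ts, serial, unit, result)
    # 1130 1747121556.829 129 systemd-udev-trigger success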
May 13 07:32:36.794106 systemd[1]: Finished systemd-sysctl.service. May 13 07:32:36.828977 systemd[1]: Finished systemd-udev-trigger.service. May 13 07:32:36.831135 systemd[1]: Starting systemd-udev-settle.service... May 13 07:32:36.839321 systemd[1]: Finished systemd-journal-flush.service. May 13 07:32:36.842670 systemd[1]: Finished flatcar-tmpfiles.service. May 13 07:32:36.848511 kernel: audit: type=1130 audit(1747121556.839:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.848613 kernel: audit: type=1130 audit(1747121556.845:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.847406 systemd[1]: Starting systemd-sysusers.service... May 13 07:32:36.859145 udevadm[966]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 07:32:36.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:36.892533 systemd[1]: Finished systemd-sysusers.service. May 13 07:32:36.899384 kernel: audit: type=1130 audit(1747121556.892:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.502193 systemd[1]: Finished systemd-hwdb-update.service. May 13 07:32:37.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.520084 kernel: audit: type=1130 audit(1747121557.503:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.520234 kernel: audit: type=1334 audit(1747121557.515:134): prog-id=18 op=LOAD May 13 07:32:37.515000 audit: BPF prog-id=18 op=LOAD May 13 07:32:37.517689 systemd[1]: Starting systemd-udevd.service... May 13 07:32:37.515000 audit: BPF prog-id=19 op=LOAD May 13 07:32:37.526436 kernel: audit: type=1334 audit(1747121557.515:135): prog-id=19 op=LOAD May 13 07:32:37.526668 kernel: audit: type=1334 audit(1747121557.515:136): prog-id=7 op=UNLOAD May 13 07:32:37.515000 audit: BPF prog-id=7 op=UNLOAD May 13 07:32:37.515000 audit: BPF prog-id=8 op=UNLOAD May 13 07:32:37.533062 kernel: audit: type=1334 audit(1747121557.515:137): prog-id=8 op=UNLOAD May 13 07:32:37.560734 systemd-udevd[971]: Using default interface naming scheme 'v252'. May 13 07:32:37.610920 systemd[1]: Started systemd-udevd.service. 
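systemd-udev-trigger and the (deprecated, per the udevadm warning above) systemd-udev-settle units wrap udevadm invocations that can also be run directly. A sketch, assuming root privileges and that /dev/vda6 (the OEM partition mounted earlier) is the device of interest:

    import subprocess

    # Replay "add" uevents so rules are re-applied, then wait for the event queue to drain.
    subprocess.run(["udevadm", "trigger", "--action=add"], check=True)
    subprocess.run(["udevadm", "settle", "--timeout=30"], check=True)

    # Read back the properties udev recorded for one block device.
    info = subprocess.run(
        ["udevadm", "info", "--query=property", "--name=/dev/vda6"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(info)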
May 13 07:32:37.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.628901 kernel: audit: type=1130 audit(1747121557.615:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.632978 systemd[1]: Starting systemd-networkd.service... May 13 07:32:37.631000 audit: BPF prog-id=20 op=LOAD May 13 07:32:37.651000 audit: BPF prog-id=21 op=LOAD May 13 07:32:37.652000 audit: BPF prog-id=22 op=LOAD May 13 07:32:37.652000 audit: BPF prog-id=23 op=LOAD May 13 07:32:37.654173 systemd[1]: Starting systemd-userdbd.service... May 13 07:32:37.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.696945 systemd[1]: Started systemd-userdbd.service. May 13 07:32:37.699092 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 13 07:32:37.763892 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 13 07:32:37.774410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 13 07:32:37.778370 kernel: ACPI: button: Power Button [PWRF] May 13 07:32:37.791641 systemd-networkd[992]: lo: Link UP May 13 07:32:37.791661 systemd-networkd[992]: lo: Gained carrier May 13 07:32:37.792070 systemd-networkd[992]: Enumeration completed May 13 07:32:37.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.792182 systemd[1]: Started systemd-networkd.service. May 13 07:32:37.792465 systemd-networkd[992]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 13 07:32:37.795391 systemd-networkd[992]: eth0: Link UP May 13 07:32:37.795401 systemd-networkd[992]: eth0: Gained carrier May 13 07:32:37.803467 systemd-networkd[992]: eth0: DHCPv4 address 172.24.4.185/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 13 07:32:37.813000 audit[981]: AVC avc: denied { confidentiality } for pid=981 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 13 07:32:37.813000 audit[981]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fc0c5ed470 a1=338ac a2=7f21a3e0cbc5 a3=5 items=110 ppid=971 pid=981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 07:32:37.813000 audit: CWD cwd="/" May 13 07:32:37.813000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=1 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=2 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=3 name=(null) inode=13300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=4 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=5 name=(null) inode=13301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=6 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=7 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=8 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=9 name=(null) inode=13303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=10 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=11 name=(null) inode=13304 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=12 name=(null) inode=13302 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=13 name=(null) inode=13305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=14 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=15 name=(null) inode=13306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=16 name=(null) inode=13302 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=17 name=(null) inode=13307 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=18 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=19 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=20 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=21 name=(null) inode=13309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=22 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=23 name=(null) inode=13310 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=24 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=25 name=(null) inode=13311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=26 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=27 name=(null) inode=13312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=28 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=29 name=(null) inode=14337 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=30 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=31 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=32 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=33 name=(null) inode=14339 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=34 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=35 name=(null) inode=14340 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=36 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=37 name=(null) inode=14341 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=38 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=39 name=(null) inode=14342 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=40 name=(null) inode=14338 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=41 name=(null) inode=14343 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=42 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=43 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=44 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=45 name=(null) inode=14345 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=46 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=47 name=(null) inode=14346 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=48 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=49 name=(null) inode=14347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=50 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=51 name=(null) inode=14348 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=52 name=(null) inode=14344 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=53 name=(null) inode=14349 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=55 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=56 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=57 name=(null) inode=14351 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=58 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=59 name=(null) inode=14352 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=60 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=61 
name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=62 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=63 name=(null) inode=14354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=64 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=65 name=(null) inode=14355 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=66 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=67 name=(null) inode=14356 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=68 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=69 name=(null) inode=14357 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=70 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=71 name=(null) inode=14358 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=72 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=73 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=74 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=75 name=(null) inode=14360 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=76 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=77 name=(null) inode=14361 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=78 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=79 name=(null) inode=14362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=80 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=81 name=(null) inode=14363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=82 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=83 name=(null) inode=14364 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=84 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=85 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=86 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=87 name=(null) inode=14366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=88 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=89 name=(null) inode=14367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=90 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=91 name=(null) inode=14368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=92 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=93 name=(null) inode=14369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=94 name=(null) inode=14365 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=95 name=(null) inode=14370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=96 name=(null) inode=14350 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=97 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=98 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=99 name=(null) inode=14372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=100 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=101 name=(null) inode=14373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=102 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=103 name=(null) inode=14374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=104 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=105 name=(null) inode=14375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=106 name=(null) inode=14371 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=107 name=(null) inode=14376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: PATH item=109 name=(null) inode=14377 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 07:32:37.813000 audit: 
PROCTITLE proctitle="(udev-worker)" May 13 07:32:37.835399 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 13 07:32:37.838391 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 07:32:37.845370 kernel: mousedev: PS/2 mouse device common for all mice May 13 07:32:37.893718 systemd[1]: Finished systemd-udev-settle.service. May 13 07:32:37.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.895508 systemd[1]: Starting lvm2-activation-early.service... May 13 07:32:37.928024 lvm[1005]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 07:32:37.968119 systemd[1]: Finished lvm2-activation-early.service. May 13 07:32:37.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:37.969645 systemd[1]: Reached target cryptsetup.target. May 13 07:32:37.973417 systemd[1]: Starting lvm2-activation.service... May 13 07:32:37.980474 lvm[1006]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 07:32:38.018563 systemd[1]: Finished lvm2-activation.service. May 13 07:32:38.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.020033 systemd[1]: Reached target local-fs-pre.target. May 13 07:32:38.021231 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 07:32:38.021295 systemd[1]: Reached target local-fs.target. May 13 07:32:38.032017 systemd[1]: Reached target machines.target. May 13 07:32:38.035796 systemd[1]: Starting ldconfig.service... May 13 07:32:38.038564 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:32:38.038666 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:32:38.042142 systemd[1]: Starting systemd-boot-update.service... May 13 07:32:38.047206 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 07:32:38.055817 systemd[1]: Starting systemd-machine-id-commit.service... May 13 07:32:38.062673 systemd[1]: Starting systemd-sysext.service... May 13 07:32:38.065554 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1008 (bootctl) May 13 07:32:38.070467 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 07:32:38.091866 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 13 07:32:38.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.117091 systemd[1]: Unmounting usr-share-oem.mount... May 13 07:32:38.158313 systemd[1]: usr-share-oem.mount: Deactivated successfully. 
May 13 07:32:38.158772 systemd[1]: Unmounted usr-share-oem.mount. May 13 07:32:38.198683 kernel: loop0: detected capacity change from 0 to 218376 May 13 07:32:38.383272 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 07:32:38.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.386285 systemd[1]: Finished systemd-machine-id-commit.service. May 13 07:32:38.436413 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 07:32:38.478483 kernel: loop1: detected capacity change from 0 to 218376 May 13 07:32:38.524520 (sd-sysext)[1022]: Using extensions 'kubernetes'. May 13 07:32:38.526966 (sd-sysext)[1022]: Merged extensions into '/usr'. May 13 07:32:38.564196 systemd-fsck[1019]: fsck.fat 4.2 (2021-01-31) May 13 07:32:38.564196 systemd-fsck[1019]: /dev/vda1: 790 files, 120692/258078 clusters May 13 07:32:38.578277 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 07:32:38.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.583963 systemd[1]: Mounting boot.mount... May 13 07:32:38.584607 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:32:38.589944 systemd[1]: Mounting usr-share-oem.mount... May 13 07:32:38.590918 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:32:38.592297 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:32:38.594832 systemd[1]: Starting modprobe@efi_pstore.service... May 13 07:32:38.596511 systemd[1]: Starting modprobe@loop.service... May 13 07:32:38.597058 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:32:38.597188 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:32:38.597326 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:32:38.600593 systemd[1]: Mounted usr-share-oem.mount. May 13 07:32:38.601535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:32:38.601669 systemd[1]: Finished modprobe@dm_mod.service. May 13 07:32:38.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.602545 systemd[1]: Finished systemd-sysext.service. May 13 07:32:38.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 07:32:38.603387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 07:32:38.603512 systemd[1]: Finished modprobe@efi_pstore.service. May 13 07:32:38.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.604379 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 07:32:38.604531 systemd[1]: Finished modprobe@loop.service. May 13 07:32:38.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.609657 systemd[1]: Starting ensure-sysext.service... May 13 07:32:38.610190 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 07:32:38.610245 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 07:32:38.611385 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 07:32:38.617303 systemd[1]: Mounted boot.mount. May 13 07:32:38.625639 systemd[1]: Reloading. May 13 07:32:38.652024 systemd-tmpfiles[1030]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 07:32:38.668089 systemd-tmpfiles[1030]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 07:32:38.684968 systemd-tmpfiles[1030]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 07:32:38.702280 /usr/lib/systemd/system-generators/torcx-generator[1049]: time="2025-05-13T07:32:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 07:32:38.702314 /usr/lib/systemd/system-generators/torcx-generator[1049]: time="2025-05-13T07:32:38Z" level=info msg="torcx already run" May 13 07:32:38.836128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 07:32:38.836474 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 07:32:38.862520 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 13 07:32:38.930000 audit: BPF prog-id=24 op=LOAD May 13 07:32:38.930000 audit: BPF prog-id=15 op=UNLOAD May 13 07:32:38.930000 audit: BPF prog-id=25 op=LOAD May 13 07:32:38.930000 audit: BPF prog-id=26 op=LOAD May 13 07:32:38.930000 audit: BPF prog-id=16 op=UNLOAD May 13 07:32:38.930000 audit: BPF prog-id=17 op=UNLOAD May 13 07:32:38.931000 audit: BPF prog-id=27 op=LOAD May 13 07:32:38.931000 audit: BPF prog-id=28 op=LOAD May 13 07:32:38.931000 audit: BPF prog-id=18 op=UNLOAD May 13 07:32:38.931000 audit: BPF prog-id=19 op=UNLOAD May 13 07:32:38.933000 audit: BPF prog-id=29 op=LOAD May 13 07:32:38.933000 audit: BPF prog-id=21 op=UNLOAD May 13 07:32:38.933000 audit: BPF prog-id=30 op=LOAD May 13 07:32:38.933000 audit: BPF prog-id=31 op=LOAD May 13 07:32:38.933000 audit: BPF prog-id=22 op=UNLOAD May 13 07:32:38.933000 audit: BPF prog-id=23 op=UNLOAD May 13 07:32:38.935000 audit: BPF prog-id=32 op=LOAD May 13 07:32:38.936000 audit: BPF prog-id=20 op=UNLOAD May 13 07:32:38.943381 systemd[1]: Finished systemd-boot-update.service. May 13 07:32:38.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.945235 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 07:32:38.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.948865 systemd[1]: Starting audit-rules.service... May 13 07:32:38.950598 systemd[1]: Starting clean-ca-certificates.service... May 13 07:32:38.953000 audit: BPF prog-id=33 op=LOAD May 13 07:32:38.952460 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 07:32:38.955224 systemd[1]: Starting systemd-resolved.service... May 13 07:32:38.958000 audit: BPF prog-id=34 op=LOAD May 13 07:32:38.959807 systemd[1]: Starting systemd-timesyncd.service... May 13 07:32:38.961825 systemd[1]: Starting systemd-update-utmp.service... May 13 07:32:38.972910 systemd[1]: Finished clean-ca-certificates.service. May 13 07:32:38.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.973660 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 07:32:38.985000 audit[1102]: SYSTEM_BOOT pid=1102 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 07:32:38.988068 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:32:38.989517 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:32:38.991912 systemd[1]: Starting modprobe@efi_pstore.service... May 13 07:32:38.994688 systemd[1]: Starting modprobe@loop.service... May 13 07:32:38.996117 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 13 07:32:38.996285 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:32:38.996456 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 07:32:38.997426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:32:38.997573 systemd[1]: Finished modprobe@dm_mod.service. May 13 07:32:38.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.998840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 07:32:38.998954 systemd[1]: Finished modprobe@efi_pstore.service. May 13 07:32:38.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:38.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.003337 systemd[1]: Finished systemd-update-utmp.service. May 13 07:32:39.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.004326 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 07:32:39.004466 systemd[1]: Finished modprobe@loop.service. May 13 07:32:39.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.005959 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:32:39.008648 systemd[1]: Starting modprobe@dm_mod.service... May 13 07:32:39.010453 systemd[1]: Starting modprobe@efi_pstore.service... May 13 07:32:39.011000 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:32:39.011152 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:32:39.011299 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 07:32:39.016579 ldconfig[1007]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 13 07:32:39.017565 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 07:32:39.019080 systemd[1]: Starting modprobe@drm.service... May 13 07:32:39.020828 systemd[1]: Starting modprobe@loop.service... May 13 07:32:39.021412 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 07:32:39.021564 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:32:39.024900 systemd[1]: Starting systemd-networkd-wait-online.service... May 13 07:32:39.025564 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 07:32:39.027123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 07:32:39.027275 systemd[1]: Finished modprobe@efi_pstore.service. May 13 07:32:39.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.028231 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 07:32:39.028358 systemd[1]: Finished modprobe@loop.service. May 13 07:32:39.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.029300 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 07:32:39.030604 systemd[1]: Finished ensure-sysext.service. May 13 07:32:39.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.035632 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 07:32:39.035790 systemd[1]: Finished modprobe@drm.service. May 13 07:32:39.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.038669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 07:32:39.038794 systemd[1]: Finished modprobe@dm_mod.service. 
May 13 07:32:39.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.039433 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 07:32:39.041522 systemd[1]: Finished ldconfig.service. May 13 07:32:39.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.043923 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 07:32:39.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.045666 systemd[1]: Starting systemd-update-done.service... May 13 07:32:39.053589 systemd[1]: Finished systemd-update-done.service. May 13 07:32:39.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 07:32:39.084190 augenrules[1127]: No rules May 13 07:32:39.083000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 07:32:39.083000 audit[1127]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff837fb6b0 a2=420 a3=0 items=0 ppid=1097 pid=1127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 07:32:39.083000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 07:32:39.084517 systemd[1]: Finished audit-rules.service. May 13 07:32:39.094376 systemd[1]: Started systemd-timesyncd.service. May 13 07:32:39.094974 systemd[1]: Reached target time-set.target. May 13 07:32:39.095580 systemd-resolved[1100]: Positive Trust Anchors: May 13 07:32:39.095596 systemd-resolved[1100]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 07:32:39.095632 systemd-resolved[1100]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 07:32:39.103766 systemd-resolved[1100]: Using system hostname 'ci-3510-3-7-n-878bc3845f.novalocal'. May 13 07:32:39.105104 systemd[1]: Started systemd-resolved.service. May 13 07:32:39.105589 systemd-networkd[992]: eth0: Gained IPv6LL May 13 07:32:39.105678 systemd[1]: Reached target network.target. 
May 13 07:32:39.106122 systemd[1]: Reached target nss-lookup.target. May 13 07:32:39.106584 systemd[1]: Reached target sysinit.target. May 13 07:32:39.107132 systemd[1]: Started motdgen.path. May 13 07:32:39.107613 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 07:32:39.108253 systemd[1]: Started logrotate.timer. May 13 07:32:39.108779 systemd[1]: Started mdadm.timer. May 13 07:32:39.109190 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 07:32:39.109653 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 07:32:39.109682 systemd[1]: Reached target paths.target. May 13 07:32:39.110098 systemd[1]: Reached target timers.target. May 13 07:32:39.110806 systemd[1]: Listening on dbus.socket. May 13 07:32:39.112254 systemd[1]: Starting docker.socket... May 13 07:32:39.115887 systemd[1]: Listening on sshd.socket. May 13 07:32:39.116479 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:32:39.117096 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 07:32:39.117732 systemd[1]: Listening on docker.socket. May 13 07:32:39.118248 systemd[1]: Reached target network-online.target. May 13 07:32:39.118726 systemd[1]: Reached target sockets.target. May 13 07:32:39.119157 systemd[1]: Reached target basic.target. May 13 07:32:39.119670 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 07:32:39.119702 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 07:32:39.120737 systemd[1]: Starting containerd.service... May 13 07:32:39.123033 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 13 07:32:39.124569 systemd[1]: Starting dbus.service... May 13 07:32:39.126916 systemd[1]: Starting enable-oem-cloudinit.service... May 13 07:32:39.129305 systemd[1]: Starting extend-filesystems.service... May 13 07:32:39.135324 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 07:32:39.137244 systemd[1]: Starting kubelet.service... May 13 07:32:39.139015 systemd[1]: Starting motdgen.service... May 13 07:32:39.142487 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 07:32:39.145435 systemd[1]: Starting sshd-keygen.service... May 13 07:32:39.150335 jq[1140]: false May 13 07:32:39.150479 systemd[1]: Starting systemd-logind.service... May 13 07:32:39.150967 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 07:32:39.151030 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 07:32:39.151496 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 07:32:39.154539 systemd[1]: Starting update-engine.service... May 13 07:32:39.161959 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 07:32:39.164328 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 13 07:32:39.164512 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 07:32:39.165369 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 07:32:39.172679 jq[1153]: true May 13 07:32:39.165522 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 07:32:39.166228 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:32:39.166250 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 07:32:39.191234 jq[1156]: true May 13 07:32:39.207491 extend-filesystems[1141]: Found loop1 May 13 07:32:39.209166 extend-filesystems[1141]: Found vda May 13 07:32:39.209166 extend-filesystems[1141]: Found vda1 May 13 07:32:39.209166 extend-filesystems[1141]: Found vda2 May 13 07:32:39.209166 extend-filesystems[1141]: Found vda3 May 13 07:32:39.209166 extend-filesystems[1141]: Found usr May 13 07:32:39.209166 extend-filesystems[1141]: Found vda4 May 13 07:32:39.209166 extend-filesystems[1141]: Found vda6 May 13 07:32:39.213618 extend-filesystems[1141]: Found vda7 May 13 07:32:39.213618 extend-filesystems[1141]: Found vda9 May 13 07:32:39.213618 extend-filesystems[1141]: Checking size of /dev/vda9 May 13 07:32:39.218765 systemd[1]: motdgen.service: Deactivated successfully. May 13 07:32:39.218932 systemd[1]: Finished motdgen.service. May 13 07:32:40.007847 systemd-resolved[1100]: Clock change detected. Flushing caches. May 13 07:32:40.007998 systemd-timesyncd[1101]: Contacted time server 20.150.221.209:123 (0.flatcar.pool.ntp.org). May 13 07:32:40.008074 systemd-timesyncd[1101]: Initial clock synchronization to Tue 2025-05-13 07:32:40.007788 UTC. May 13 07:32:40.010246 dbus-daemon[1137]: [system] SELinux support is enabled May 13 07:32:40.010382 systemd[1]: Started dbus.service. May 13 07:32:40.013472 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 07:32:40.013507 systemd[1]: Reached target system-config.target. May 13 07:32:40.014092 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 07:32:40.014117 systemd[1]: Reached target user-config.target. May 13 07:32:40.025846 extend-filesystems[1141]: Resized partition /dev/vda9 May 13 07:32:40.042105 extend-filesystems[1188]: resize2fs 1.46.5 (30-Dec-2021) May 13 07:32:40.052449 systemd[1]: Created slice system-sshd.slice. May 13 07:32:40.067466 env[1157]: time="2025-05-13T07:32:40.067385906Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 07:32:40.084017 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks May 13 07:32:40.088004 kernel: EXT4-fs (vda9): resized filesystem to 2014203 May 13 07:32:40.100006 update_engine[1149]: I0513 07:32:40.099030 1149 main.cc:92] Flatcar Update Engine starting May 13 07:32:40.140321 update_engine[1149]: I0513 07:32:40.108395 1149 update_check_scheduler.cc:74] Next update check in 2m30s May 13 07:32:40.140390 env[1157]: time="2025-05-13T07:32:40.121653054Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 07:32:40.108396 systemd[1]: Started update-engine.service. 
May 13 07:32:40.140528 env[1157]: time="2025-05-13T07:32:40.140382641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 07:32:40.110673 systemd[1]: Started locksmithd.service. May 13 07:32:40.141766 env[1157]: time="2025-05-13T07:32:40.141713096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 07:32:40.141766 env[1157]: time="2025-05-13T07:32:40.141763671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 07:32:40.142248 systemd-logind[1147]: Watching system buttons on /dev/input/event1 (Power Button) May 13 07:32:40.142273 systemd-logind[1147]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 07:32:40.142829 extend-filesystems[1188]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 07:32:40.142829 extend-filesystems[1188]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 07:32:40.142829 extend-filesystems[1188]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. May 13 07:32:40.154240 extend-filesystems[1141]: Resized filesystem in /dev/vda9 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145327285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145355888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145375134Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145387558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145482446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145780505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145922531Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.145943641Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.146058867Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 07:32:40.156007 env[1157]: time="2025-05-13T07:32:40.146075478Z" level=info msg="metadata content store policy set" policy=shared May 13 07:32:40.143488 systemd[1]: extend-filesystems.service: Deactivated successfully. 
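
The extend-filesystems/resize2fs entries above report the ext4 filesystem on /dev/vda9 being grown on-line from 1,617,920 to 2,014,203 blocks of 4 KiB. A minimal sketch (not part of the boot flow) that turns those logged block counts into byte sizes, to make the growth concrete:

    # Minimal sketch: convert the resize2fs block counts logged above into sizes.
    # The block counts and the 4 KiB block size come from the log; nothing else is assumed.
    BLOCK_SIZE = 4096  # bytes, "(4k) blocks" per the resize2fs message

    old_blocks = 1_617_920
    new_blocks = 2_014_203

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")                # ~6.17 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")                # ~7.68 GiB
    print(f"gained: {gib(new_blocks - old_blocks):.2f} GiB")   # ~1.51 GiB
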
May 13 07:32:40.143641 systemd[1]: Finished extend-filesystems.service. May 13 07:32:40.143941 systemd-logind[1147]: New seat seat0. May 13 07:32:40.153720 systemd[1]: Started systemd-logind.service. May 13 07:32:40.170671 bash[1190]: Updated "/home/core/.ssh/authorized_keys" May 13 07:32:40.171330 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.181767769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.181855293Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.181876292Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.181931115Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.181950622Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.181966472Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182006917Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182027306Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182043566Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182059336Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182093780Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182111083Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182276633Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 07:32:40.183335 env[1157]: time="2025-05-13T07:32:40.182389004Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.182781220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.182814021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.182852984Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.182928145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.182945848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.182960846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.182974622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.183060193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.183093115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.183109355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.183124153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 07:32:40.183673 env[1157]: time="2025-05-13T07:32:40.183141155Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.183309009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.183955081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.183972895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.184009944Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.184028058Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.184040441Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.184062192Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 07:32:40.184406 env[1157]: time="2025-05-13T07:32:40.184119339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 13 07:32:40.184818 env[1157]: time="2025-05-13T07:32:40.184663840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 07:32:40.184818 env[1157]: time="2025-05-13T07:32:40.184758879Z" level=info msg="Connect containerd service" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.184923507Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.185681148Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186278308Z" level=info msg="Start subscribing containerd event" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186395338Z" level=info msg="Start recovering state" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186492149Z" level=info msg="Start event monitor" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186517858Z" level=info msg="Start snapshots syncer" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186542774Z" level=info msg="Start cni network conf syncer for default" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186560337Z" level=info msg="Start streaming server" May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186676385Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186758399Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 07:32:40.188205 env[1157]: time="2025-05-13T07:32:40.186817610Z" level=info msg="containerd successfully booted in 0.123326s" May 13 07:32:40.186909 systemd[1]: Started containerd.service. May 13 07:32:40.358689 locksmithd[1195]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 07:32:40.967152 sshd_keygen[1165]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 07:32:41.000358 systemd[1]: Finished sshd-keygen.service. May 13 07:32:41.002479 systemd[1]: Starting issuegen.service... May 13 07:32:41.004230 systemd[1]: Started sshd@0-172.24.4.185:22-172.24.4.1:54212.service. May 13 07:32:41.010286 systemd[1]: issuegen.service: Deactivated successfully. May 13 07:32:41.010468 systemd[1]: Finished issuegen.service. May 13 07:32:41.012394 systemd[1]: Starting systemd-user-sessions.service... May 13 07:32:41.023054 systemd[1]: Finished systemd-user-sessions.service. May 13 07:32:41.025234 systemd[1]: Started getty@tty1.service. May 13 07:32:41.027054 systemd[1]: Started serial-getty@ttyS0.service. May 13 07:32:41.027700 systemd[1]: Reached target getty.target. May 13 07:32:42.221160 systemd[1]: Started kubelet.service. May 13 07:32:42.253630 sshd[1211]: Accepted publickey for core from 172.24.4.1 port 54212 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:32:42.259148 sshd[1211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:32:42.292872 systemd[1]: Created slice user-500.slice. May 13 07:32:42.297179 systemd[1]: Starting user-runtime-dir@500.service... May 13 07:32:42.305542 systemd-logind[1147]: New session 1 of user core. May 13 07:32:42.317607 systemd[1]: Finished user-runtime-dir@500.service. May 13 07:32:42.319738 systemd[1]: Starting user@500.service... May 13 07:32:42.324869 (systemd)[1222]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 07:32:42.427073 systemd[1222]: Queued start job for default target default.target. May 13 07:32:42.427900 systemd[1222]: Reached target paths.target. May 13 07:32:42.428027 systemd[1222]: Reached target sockets.target. May 13 07:32:42.428112 systemd[1222]: Reached target timers.target. May 13 07:32:42.428215 systemd[1222]: Reached target basic.target. May 13 07:32:42.428403 systemd[1]: Started user@500.service. May 13 07:32:42.429754 systemd[1]: Started session-1.scope. May 13 07:32:42.430558 systemd[1222]: Reached target default.target. May 13 07:32:42.430699 systemd[1222]: Startup finished in 99ms. May 13 07:32:42.967617 systemd[1]: Started sshd@1-172.24.4.185:22-172.24.4.1:54220.service. May 13 07:32:44.083904 kubelet[1220]: E0513 07:32:44.083785 1220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:32:44.090154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:32:44.090429 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:32:44.090894 systemd[1]: kubelet.service: Consumed 2.226s CPU time. 
May 13 07:32:44.972861 sshd[1236]: Accepted publickey for core from 172.24.4.1 port 54220 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:32:44.977137 sshd[1236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:32:44.988670 systemd-logind[1147]: New session 2 of user core. May 13 07:32:44.989454 systemd[1]: Started session-2.scope. May 13 07:32:45.615485 sshd[1236]: pam_unix(sshd:session): session closed for user core May 13 07:32:45.622916 systemd[1]: sshd@1-172.24.4.185:22-172.24.4.1:54220.service: Deactivated successfully. May 13 07:32:45.624337 systemd[1]: session-2.scope: Deactivated successfully. May 13 07:32:45.626069 systemd-logind[1147]: Session 2 logged out. Waiting for processes to exit. May 13 07:32:45.628215 systemd[1]: Started sshd@2-172.24.4.185:22-172.24.4.1:43466.service. May 13 07:32:45.632884 systemd-logind[1147]: Removed session 2. May 13 07:32:47.002526 coreos-metadata[1136]: May 13 07:32:47.002 WARN failed to locate config-drive, using the metadata service API instead May 13 07:32:47.097166 coreos-metadata[1136]: May 13 07:32:47.097 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 13 07:32:47.170767 sshd[1243]: Accepted publickey for core from 172.24.4.1 port 43466 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:32:47.172947 sshd[1243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:32:47.183127 systemd-logind[1147]: New session 3 of user core. May 13 07:32:47.183809 systemd[1]: Started session-3.scope. May 13 07:32:47.295513 coreos-metadata[1136]: May 13 07:32:47.295 INFO Fetch successful May 13 07:32:47.295820 coreos-metadata[1136]: May 13 07:32:47.295 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 13 07:32:47.311134 coreos-metadata[1136]: May 13 07:32:47.311 INFO Fetch successful May 13 07:32:47.321132 unknown[1136]: wrote ssh authorized keys file for user: core May 13 07:32:47.364610 update-ssh-keys[1248]: Updated "/home/core/.ssh/authorized_keys" May 13 07:32:47.366123 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 13 07:32:47.367041 systemd[1]: Reached target multi-user.target. May 13 07:32:47.370109 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 07:32:47.386838 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 07:32:47.387501 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 07:32:47.391099 systemd[1]: Startup finished in 967ms (kernel) + 6.750s (initrd) + 14.252s (userspace) = 21.970s. May 13 07:32:47.829096 sshd[1243]: pam_unix(sshd:session): session closed for user core May 13 07:32:47.834211 systemd-logind[1147]: Session 3 logged out. Waiting for processes to exit. May 13 07:32:47.835440 systemd[1]: sshd@2-172.24.4.185:22-172.24.4.1:43466.service: Deactivated successfully. May 13 07:32:47.837202 systemd[1]: session-3.scope: Deactivated successfully. May 13 07:32:47.838813 systemd-logind[1147]: Removed session 3. May 13 07:32:54.177263 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 07:32:54.177721 systemd[1]: Stopped kubelet.service. May 13 07:32:54.177807 systemd[1]: kubelet.service: Consumed 2.226s CPU time. May 13 07:32:54.182442 systemd[1]: Starting kubelet.service... May 13 07:32:54.417450 systemd[1]: Started kubelet.service. 
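
The sshd entries above accept the core user's key and log it as "RSA SHA256:o3qf9B...". That fingerprint is the base64-encoded SHA-256 digest of the raw public-key blob, with the trailing padding stripped. A minimal sketch of the same computation, assuming an OpenSSH public-key line is read from the authorized_keys path that appears in the log:

    # Minimal sketch: reproduce the "SHA256:..." fingerprint format seen in the
    # sshd entries above. The file path is taken from the log for illustration;
    # any single-line OpenSSH public key would work the same way.
    import base64
    import hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        # An OpenSSH public key line looks like: "ssh-rsa AAAAB3Nza... comment"
        blob_b64 = pubkey_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as fh:
        print(ssh_sha256_fingerprint(fh.readline()))
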
May 13 07:32:54.634206 kubelet[1256]: E0513 07:32:54.633760 1256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:32:54.640389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:32:54.640509 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:32:57.839423 systemd[1]: Started sshd@3-172.24.4.185:22-172.24.4.1:45864.service. May 13 07:32:59.263126 sshd[1263]: Accepted publickey for core from 172.24.4.1 port 45864 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:32:59.265527 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:32:59.275699 systemd-logind[1147]: New session 4 of user core. May 13 07:32:59.276436 systemd[1]: Started session-4.scope. May 13 07:33:00.045802 sshd[1263]: pam_unix(sshd:session): session closed for user core May 13 07:33:00.052504 systemd[1]: Started sshd@4-172.24.4.185:22-172.24.4.1:45870.service. May 13 07:33:00.057565 systemd[1]: sshd@3-172.24.4.185:22-172.24.4.1:45864.service: Deactivated successfully. May 13 07:33:00.059144 systemd[1]: session-4.scope: Deactivated successfully. May 13 07:33:00.061851 systemd-logind[1147]: Session 4 logged out. Waiting for processes to exit. May 13 07:33:00.064462 systemd-logind[1147]: Removed session 4. May 13 07:33:01.718614 sshd[1268]: Accepted publickey for core from 172.24.4.1 port 45870 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:01.722178 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:01.732203 systemd-logind[1147]: New session 5 of user core. May 13 07:33:01.732963 systemd[1]: Started session-5.scope. May 13 07:33:02.598103 sshd[1268]: pam_unix(sshd:session): session closed for user core May 13 07:33:02.605596 systemd[1]: Started sshd@5-172.24.4.185:22-172.24.4.1:45884.service. May 13 07:33:02.606624 systemd[1]: sshd@4-172.24.4.185:22-172.24.4.1:45870.service: Deactivated successfully. May 13 07:33:02.608065 systemd[1]: session-5.scope: Deactivated successfully. May 13 07:33:02.610722 systemd-logind[1147]: Session 5 logged out. Waiting for processes to exit. May 13 07:33:02.613897 systemd-logind[1147]: Removed session 5. May 13 07:33:03.862649 sshd[1274]: Accepted publickey for core from 172.24.4.1 port 45884 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:03.866017 sshd[1274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:03.875962 systemd-logind[1147]: New session 6 of user core. May 13 07:33:03.876613 systemd[1]: Started session-6.scope. May 13 07:33:04.555942 sshd[1274]: pam_unix(sshd:session): session closed for user core May 13 07:33:04.563837 systemd[1]: sshd@5-172.24.4.185:22-172.24.4.1:45884.service: Deactivated successfully. May 13 07:33:04.565217 systemd[1]: session-6.scope: Deactivated successfully. May 13 07:33:04.566883 systemd-logind[1147]: Session 6 logged out. Waiting for processes to exit. May 13 07:33:04.569571 systemd[1]: Started sshd@6-172.24.4.185:22-172.24.4.1:38144.service. May 13 07:33:04.573326 systemd-logind[1147]: Removed session 6. May 13 07:33:04.677179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 13 07:33:04.677557 systemd[1]: Stopped kubelet.service. May 13 07:33:04.680629 systemd[1]: Starting kubelet.service... May 13 07:33:04.812017 systemd[1]: Started kubelet.service. May 13 07:33:05.049628 kubelet[1286]: E0513 07:33:05.049536 1286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:33:05.053734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:33:05.054106 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:33:06.077722 sshd[1281]: Accepted publickey for core from 172.24.4.1 port 38144 ssh2: RSA SHA256:o3qf9BbIw3lN4el61qbJXxqdYr8ihfCj03haQgpGwd0 May 13 07:33:06.081190 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 07:33:06.091179 systemd-logind[1147]: New session 7 of user core. May 13 07:33:06.091926 systemd[1]: Started session-7.scope. May 13 07:33:06.607167 sudo[1294]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 07:33:06.607720 sudo[1294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 07:33:06.639181 systemd[1]: Starting coreos-metadata.service... May 13 07:33:13.705628 coreos-metadata[1298]: May 13 07:33:13.705 WARN failed to locate config-drive, using the metadata service API instead May 13 07:33:13.797487 coreos-metadata[1298]: May 13 07:33:13.797 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 13 07:33:13.983139 coreos-metadata[1298]: May 13 07:33:13.982 INFO Fetch successful May 13 07:33:13.983360 coreos-metadata[1298]: May 13 07:33:13.983 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 13 07:33:13.996233 coreos-metadata[1298]: May 13 07:33:13.996 INFO Fetch successful May 13 07:33:13.996233 coreos-metadata[1298]: May 13 07:33:13.996 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 13 07:33:14.012970 coreos-metadata[1298]: May 13 07:33:14.012 INFO Fetch successful May 13 07:33:14.012970 coreos-metadata[1298]: May 13 07:33:14.012 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 13 07:33:14.026621 coreos-metadata[1298]: May 13 07:33:14.026 INFO Fetch successful May 13 07:33:14.026621 coreos-metadata[1298]: May 13 07:33:14.026 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 13 07:33:14.042357 coreos-metadata[1298]: May 13 07:33:14.042 INFO Fetch successful May 13 07:33:14.061072 systemd[1]: Finished coreos-metadata.service. May 13 07:33:15.180960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 07:33:15.182688 systemd[1]: Stopped kubelet.service. May 13 07:33:15.192629 systemd[1]: Starting kubelet.service... May 13 07:33:15.581858 systemd[1]: Started kubelet.service. 
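
The coreos-metadata entries above fail to find a config-drive and fall back to the EC2-compatible metadata API at 169.254.169.254, fetching hostname, instance-id, instance-type, local-ipv4 and public-ipv4. A minimal sketch that queries the same endpoints; the paths are the ones visible in the log, while the urllib usage and the lack of retries are simplifications for illustration:

    # Minimal sketch of the fallback the coreos-metadata entries above describe:
    # fetching the EC2-compatible metadata endpoints directly. Endpoint paths are
    # taken from the log; error handling and retries are omitted for brevity.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data"
    PATHS = ["hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"]

    for path in PATHS:
        with urlopen(f"{BASE}/{path}", timeout=5) as resp:
            print(f"{path}: {resp.read().decode().strip()}")
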
May 13 07:33:15.680040 kubelet[1326]: E0513 07:33:15.678258 1326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 07:33:15.682873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 07:33:15.683028 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 07:33:15.899871 systemd[1]: Stopped kubelet.service. May 13 07:33:15.906211 systemd[1]: Starting kubelet.service... May 13 07:33:15.969343 systemd[1]: Reloading. May 13 07:33:16.063077 /usr/lib/systemd/system-generators/torcx-generator[1365]: time="2025-05-13T07:33:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 07:33:16.063133 /usr/lib/systemd/system-generators/torcx-generator[1365]: time="2025-05-13T07:33:16Z" level=info msg="torcx already run" May 13 07:33:16.319578 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 07:33:16.319602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 07:33:16.342477 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 07:33:16.443804 systemd[1]: Started kubelet.service. May 13 07:33:16.448249 systemd[1]: Stopping kubelet.service... May 13 07:33:16.448824 systemd[1]: kubelet.service: Deactivated successfully. May 13 07:33:16.449153 systemd[1]: Stopped kubelet.service. May 13 07:33:16.451045 systemd[1]: Starting kubelet.service... May 13 07:33:16.557643 systemd[1]: Started kubelet.service. May 13 07:33:16.642338 kubelet[1419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 07:33:16.642681 kubelet[1419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 07:33:16.642738 kubelet[1419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 07:33:16.642958 kubelet[1419]: I0513 07:33:16.642915 1419 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 07:33:17.190663 kubelet[1419]: I0513 07:33:17.190588 1419 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 07:33:17.190663 kubelet[1419]: I0513 07:33:17.190626 1419 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 07:33:17.192622 kubelet[1419]: I0513 07:33:17.192565 1419 server.go:954] "Client rotation is on, will bootstrap in background" May 13 07:33:17.243503 kubelet[1419]: I0513 07:33:17.242598 1419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 07:33:17.278016 kubelet[1419]: E0513 07:33:17.277902 1419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 07:33:17.278356 kubelet[1419]: I0513 07:33:17.278323 1419 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 07:33:17.285472 kubelet[1419]: I0513 07:33:17.285428 1419 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 07:33:17.285920 kubelet[1419]: I0513 07:33:17.285887 1419 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 07:33:17.286201 kubelet[1419]: I0513 07:33:17.286005 1419 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.24.4.185","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 07:33:17.286356 kubelet[1419]: I0513 07:33:17.286344 1419 topology_manager.go:138] "Creating topology manager with none policy" May 13 07:33:17.286428 kubelet[1419]: I0513 07:33:17.286419 1419 container_manager_linux.go:304] "Creating device plugin manager" May 
13 07:33:17.286605 kubelet[1419]: I0513 07:33:17.286592 1419 state_mem.go:36] "Initialized new in-memory state store" May 13 07:33:17.296277 kubelet[1419]: I0513 07:33:17.296233 1419 kubelet.go:446] "Attempting to sync node with API server" May 13 07:33:17.296487 kubelet[1419]: I0513 07:33:17.296476 1419 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 07:33:17.296567 kubelet[1419]: I0513 07:33:17.296558 1419 kubelet.go:352] "Adding apiserver pod source" May 13 07:33:17.296635 kubelet[1419]: I0513 07:33:17.296626 1419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 07:33:17.313377 kubelet[1419]: E0513 07:33:17.313259 1419 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:17.313377 kubelet[1419]: E0513 07:33:17.313372 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:17.314530 kubelet[1419]: I0513 07:33:17.314507 1419 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 07:33:17.315092 kubelet[1419]: I0513 07:33:17.315078 1419 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 07:33:17.316968 kubelet[1419]: W0513 07:33:17.316942 1419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 07:33:17.320359 kubelet[1419]: I0513 07:33:17.320317 1419 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 07:33:17.320596 kubelet[1419]: I0513 07:33:17.320585 1419 server.go:1287] "Started kubelet" May 13 07:33:17.332283 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
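
The container_manager_linux entry above dumps the kubelet's nodeConfig as a JSON blob (cgroup driver systemd, cgroup v2, the hard eviction thresholds, and so on). A minimal sketch showing that the blob can be inspected directly with the standard library, assuming it has been copied out of the journal and piped in on stdin; the field names are the ones visible in the log entry:

    # Minimal sketch: inspect the nodeConfig JSON logged by container_manager_linux
    # above. Assumes the blob has been extracted from the journal and is fed on stdin.
    import json
    import sys

    cfg = json.loads(sys.stdin.read())
    print("cgroup driver:", cfg["CgroupDriver"])    # "systemd" in the entry above
    print("cgroup version:", cfg["CgroupVersion"])  # 2
    for t in cfg["HardEvictionThresholds"]:
        val = t["Value"]
        limit = val["Quantity"] if val["Quantity"] else f'{val["Percentage"]:.0%}'
        print(f'{t["Signal"]}: LessThan {limit}')   # e.g. memory.available: LessThan 100Mi
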
May 13 07:33:17.332724 kubelet[1419]: I0513 07:33:17.332690 1419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 07:33:17.339670 kubelet[1419]: I0513 07:33:17.339601 1419 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 07:33:17.341909 kubelet[1419]: I0513 07:33:17.341863 1419 server.go:490] "Adding debug handlers to kubelet server" May 13 07:33:17.343792 kubelet[1419]: I0513 07:33:17.343703 1419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 07:33:17.344113 kubelet[1419]: I0513 07:33:17.344091 1419 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 07:33:17.344257 kubelet[1419]: I0513 07:33:17.344204 1419 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 07:33:17.344629 kubelet[1419]: E0513 07:33:17.344609 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:17.344896 kubelet[1419]: I0513 07:33:17.344855 1419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 07:33:17.345154 kubelet[1419]: I0513 07:33:17.345141 1419 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 07:33:17.345285 kubelet[1419]: I0513 07:33:17.345275 1419 reconciler.go:26] "Reconciler: start to sync state" May 13 07:33:17.347796 kubelet[1419]: I0513 07:33:17.347753 1419 factory.go:221] Registration of the systemd container factory successfully May 13 07:33:17.349143 kubelet[1419]: I0513 07:33:17.347966 1419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 07:33:17.353323 kubelet[1419]: E0513 07:33:17.353297 1419 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 07:33:17.363553 kubelet[1419]: I0513 07:33:17.363508 1419 factory.go:221] Registration of the containerd container factory successfully May 13 07:33:17.384725 kubelet[1419]: W0513 07:33:17.384681 1419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 07:33:17.384725 kubelet[1419]: E0513 07:33:17.384729 1419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 13 07:33:17.384942 kubelet[1419]: W0513 07:33:17.384781 1419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 07:33:17.384942 kubelet[1419]: E0513 07:33:17.384797 1419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 13 07:33:17.384942 kubelet[1419]: W0513 07:33:17.384862 1419 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.24.4.185" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 07:33:17.384942 kubelet[1419]: E0513 07:33:17.384877 1419 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.24.4.185\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 13 07:33:17.387247 kubelet[1419]: E0513 07:33:17.384945 1419 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.24.4.185.183f05d60bb41f26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.24.4.185,UID:172.24.4.185,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.24.4.185,},FirstTimestamp:2025-05-13 07:33:17.320535846 +0000 UTC m=+0.753269831,LastTimestamp:2025-05-13 07:33:17.320535846 +0000 UTC m=+0.753269831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.24.4.185,}" May 13 07:33:17.387247 kubelet[1419]: E0513 07:33:17.387231 1419 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.185\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 07:33:17.391826 kubelet[1419]: I0513 07:33:17.391408 1419 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 
07:33:17.391826 kubelet[1419]: I0513 07:33:17.391425 1419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 07:33:17.391826 kubelet[1419]: I0513 07:33:17.391441 1419 state_mem.go:36] "Initialized new in-memory state store" May 13 07:33:17.406283 kubelet[1419]: I0513 07:33:17.406263 1419 policy_none.go:49] "None policy: Start" May 13 07:33:17.406428 kubelet[1419]: I0513 07:33:17.406416 1419 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 07:33:17.406527 kubelet[1419]: I0513 07:33:17.406516 1419 state_mem.go:35] "Initializing new in-memory state store" May 13 07:33:17.415646 systemd[1]: Created slice kubepods.slice. May 13 07:33:17.421040 systemd[1]: Created slice kubepods-burstable.slice. May 13 07:33:17.425686 systemd[1]: Created slice kubepods-besteffort.slice. May 13 07:33:17.432211 kubelet[1419]: I0513 07:33:17.432188 1419 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 07:33:17.432513 kubelet[1419]: I0513 07:33:17.432503 1419 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 07:33:17.432635 kubelet[1419]: I0513 07:33:17.432601 1419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 07:33:17.433705 kubelet[1419]: I0513 07:33:17.433553 1419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 07:33:17.435063 kubelet[1419]: E0513 07:33:17.435048 1419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 07:33:17.435177 kubelet[1419]: E0513 07:33:17.435164 1419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.185\" not found" May 13 07:33:17.498040 kubelet[1419]: I0513 07:33:17.496098 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 07:33:17.500208 kubelet[1419]: I0513 07:33:17.500185 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 07:33:17.500312 kubelet[1419]: I0513 07:33:17.500301 1419 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 07:33:17.500409 kubelet[1419]: I0513 07:33:17.500387 1419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 13 07:33:17.500483 kubelet[1419]: I0513 07:33:17.500472 1419 kubelet.go:2388] "Starting kubelet main sync loop" May 13 07:33:17.500593 kubelet[1419]: E0513 07:33:17.500577 1419 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 07:33:17.534152 kubelet[1419]: I0513 07:33:17.534101 1419 kubelet_node_status.go:76] "Attempting to register node" node="172.24.4.185" May 13 07:33:17.543341 kubelet[1419]: I0513 07:33:17.543307 1419 kubelet_node_status.go:79] "Successfully registered node" node="172.24.4.185" May 13 07:33:17.543423 kubelet[1419]: E0513 07:33:17.543355 1419 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"172.24.4.185\": node \"172.24.4.185\" not found" May 13 07:33:17.554868 kubelet[1419]: E0513 07:33:17.554824 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:17.641219 sudo[1294]: pam_unix(sudo:session): session closed for user root May 13 07:33:17.655733 kubelet[1419]: E0513 07:33:17.655682 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:17.757425 kubelet[1419]: E0513 07:33:17.757240 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:17.858113 kubelet[1419]: E0513 07:33:17.858066 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:17.919390 sshd[1281]: pam_unix(sshd:session): session closed for user core May 13 07:33:17.924404 systemd[1]: sshd@6-172.24.4.185:22-172.24.4.1:38144.service: Deactivated successfully. May 13 07:33:17.926052 systemd[1]: session-7.scope: Deactivated successfully. May 13 07:33:17.926340 systemd[1]: session-7.scope: Consumed 1.311s CPU time. May 13 07:33:17.927709 systemd-logind[1147]: Session 7 logged out. Waiting for processes to exit. May 13 07:33:17.930106 systemd-logind[1147]: Removed session 7. 
May 13 07:33:17.959211 kubelet[1419]: E0513 07:33:17.959138 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.059845 kubelet[1419]: E0513 07:33:18.059658 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.160464 kubelet[1419]: E0513 07:33:18.160409 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.198010 kubelet[1419]: I0513 07:33:18.197943 1419 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 07:33:18.198364 kubelet[1419]: W0513 07:33:18.198322 1419 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 07:33:18.261094 kubelet[1419]: E0513 07:33:18.261054 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.314340 kubelet[1419]: E0513 07:33:18.313597 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:18.361592 kubelet[1419]: E0513 07:33:18.361392 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.462011 kubelet[1419]: E0513 07:33:18.461912 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.563250 kubelet[1419]: E0513 07:33:18.563185 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.665257 kubelet[1419]: E0513 07:33:18.664355 1419 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"172.24.4.185\" not found" May 13 07:33:18.767303 kubelet[1419]: I0513 07:33:18.767246 1419 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.2.0/24" May 13 07:33:18.768329 env[1157]: time="2025-05-13T07:33:18.768092460Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 07:33:18.769388 kubelet[1419]: I0513 07:33:18.769353 1419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.2.0/24" May 13 07:33:19.314709 kubelet[1419]: I0513 07:33:19.314660 1419 apiserver.go:52] "Watching apiserver" May 13 07:33:19.315162 kubelet[1419]: E0513 07:33:19.315130 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:19.334976 systemd[1]: Created slice kubepods-besteffort-poda9a4623f_fb09_4308_8592_69504ff321b5.slice. 
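
The kuberuntime_manager entry above pushes the node's pod CIDR 192.168.2.0/24 to the runtime. A minimal sketch, using only the standard library, of what that block covers; the CIDR value is the one in the log, everything else is illustrative:

    # Minimal sketch: expand the pod CIDR the kubelet logs above.
    import ipaddress

    cidr = ipaddress.ip_network("192.168.2.0/24")  # value from the log entry above
    hosts = list(cidr.hosts())
    print(cidr.num_addresses)                      # 256 addresses in the block
    print(hosts[0], "-", hosts[-1])                # 192.168.2.1 - 192.168.2.254 usable
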
May 13 07:33:19.356067 kubelet[1419]: I0513 07:33:19.355976 1419 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 07:33:19.356768 kubelet[1419]: I0513 07:33:19.356644 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9a4623f-fb09-4308-8592-69504ff321b5-xtables-lock\") pod \"kube-proxy-t2csn\" (UID: \"a9a4623f-fb09-4308-8592-69504ff321b5\") " pod="kube-system/kube-proxy-t2csn" May 13 07:33:19.357179 kubelet[1419]: I0513 07:33:19.357056 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlmn\" (UniqueName: \"kubernetes.io/projected/a9a4623f-fb09-4308-8592-69504ff321b5-kube-api-access-gqlmn\") pod \"kube-proxy-t2csn\" (UID: \"a9a4623f-fb09-4308-8592-69504ff321b5\") " pod="kube-system/kube-proxy-t2csn" May 13 07:33:19.359920 systemd[1]: Created slice kubepods-burstable-podcd31ec6e_e37e_4887_86fb_4d9dca1f1e9d.slice. May 13 07:33:19.361272 kubelet[1419]: I0513 07:33:19.361197 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hostproc\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361438 kubelet[1419]: I0513 07:33:19.361286 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cni-path\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361438 kubelet[1419]: I0513 07:33:19.361350 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-config-path\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361438 kubelet[1419]: I0513 07:33:19.361425 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-net\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361835 kubelet[1419]: I0513 07:33:19.361477 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hubble-tls\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361835 kubelet[1419]: I0513 07:33:19.361519 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-bpf-maps\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361835 kubelet[1419]: I0513 07:33:19.361610 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-etc-cni-netd\") pod 
\"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361835 kubelet[1419]: I0513 07:33:19.361659 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-kernel\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361835 kubelet[1419]: I0513 07:33:19.361716 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j96qz\" (UniqueName: \"kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-kube-api-access-j96qz\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.361835 kubelet[1419]: I0513 07:33:19.361757 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-cgroup\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.362281 kubelet[1419]: I0513 07:33:19.361797 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-lib-modules\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.362281 kubelet[1419]: I0513 07:33:19.361835 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-xtables-lock\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.362281 kubelet[1419]: I0513 07:33:19.361874 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-clustermesh-secrets\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.362281 kubelet[1419]: I0513 07:33:19.361916 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a9a4623f-fb09-4308-8592-69504ff321b5-kube-proxy\") pod \"kube-proxy-t2csn\" (UID: \"a9a4623f-fb09-4308-8592-69504ff321b5\") " pod="kube-system/kube-proxy-t2csn" May 13 07:33:19.362281 kubelet[1419]: I0513 07:33:19.361969 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-run\") pod \"cilium-cd5jz\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " pod="kube-system/cilium-cd5jz" May 13 07:33:19.362281 kubelet[1419]: I0513 07:33:19.362110 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9a4623f-fb09-4308-8592-69504ff321b5-lib-modules\") pod \"kube-proxy-t2csn\" (UID: \"a9a4623f-fb09-4308-8592-69504ff321b5\") " pod="kube-system/kube-proxy-t2csn" May 13 07:33:19.469452 kubelet[1419]: I0513 07:33:19.468950 1419 
swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 13 07:33:19.666470 env[1157]: time="2025-05-13T07:33:19.658703362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2csn,Uid:a9a4623f-fb09-4308-8592-69504ff321b5,Namespace:kube-system,Attempt:0,}" May 13 07:33:19.678890 env[1157]: time="2025-05-13T07:33:19.678769224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd5jz,Uid:cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d,Namespace:kube-system,Attempt:0,}" May 13 07:33:20.316847 kubelet[1419]: E0513 07:33:20.316721 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:20.531403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430424252.mount: Deactivated successfully. May 13 07:33:20.547610 env[1157]: time="2025-05-13T07:33:20.547439940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.551100 env[1157]: time="2025-05-13T07:33:20.551032543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.559469 env[1157]: time="2025-05-13T07:33:20.559383837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.564297 env[1157]: time="2025-05-13T07:33:20.564208320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.571528 env[1157]: time="2025-05-13T07:33:20.570517004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.576813 env[1157]: time="2025-05-13T07:33:20.576718663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.584011 env[1157]: time="2025-05-13T07:33:20.583855491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.593525 env[1157]: time="2025-05-13T07:33:20.593448184Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:20.630838 env[1157]: time="2025-05-13T07:33:20.630418728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:33:20.630838 env[1157]: time="2025-05-13T07:33:20.630509391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:33:20.630838 env[1157]: time="2025-05-13T07:33:20.630544071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:33:20.631757 env[1157]: time="2025-05-13T07:33:20.631562239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5 pid=1472 runtime=io.containerd.runc.v2 May 13 07:33:20.660184 env[1157]: time="2025-05-13T07:33:20.660119203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:33:20.660361 env[1157]: time="2025-05-13T07:33:20.660337544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:33:20.660517 env[1157]: time="2025-05-13T07:33:20.660494341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:33:20.660726 env[1157]: time="2025-05-13T07:33:20.660687281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66629885b95359ac3141821f5456f3c4f5568b3c0c01d82c561424a7498a0346 pid=1492 runtime=io.containerd.runc.v2 May 13 07:33:20.675416 systemd[1]: Started cri-containerd-68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5.scope. May 13 07:33:20.695972 systemd[1]: Started cri-containerd-66629885b95359ac3141821f5456f3c4f5568b3c0c01d82c561424a7498a0346.scope. May 13 07:33:20.723473 env[1157]: time="2025-05-13T07:33:20.723437412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd5jz,Uid:cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\"" May 13 07:33:20.726807 env[1157]: time="2025-05-13T07:33:20.726777836Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 07:33:20.736171 env[1157]: time="2025-05-13T07:33:20.736113620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t2csn,Uid:a9a4623f-fb09-4308-8592-69504ff321b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"66629885b95359ac3141821f5456f3c4f5568b3c0c01d82c561424a7498a0346\"" May 13 07:33:21.317865 kubelet[1419]: E0513 07:33:21.317809 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:22.318454 kubelet[1419]: E0513 07:33:22.318399 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:23.319817 kubelet[1419]: E0513 07:33:23.319681 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:24.320819 kubelet[1419]: E0513 07:33:24.320701 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:25.321745 kubelet[1419]: E0513 07:33:25.321640 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:25.426238 update_engine[1149]: I0513 07:33:25.426097 1149 update_attempter.cc:509] Updating boot flags... 
May 13 07:33:26.322068 kubelet[1419]: E0513 07:33:26.321970 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:27.322627 kubelet[1419]: E0513 07:33:27.322562 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:28.323119 kubelet[1419]: E0513 07:33:28.323069 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:29.153900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541223235.mount: Deactivated successfully. May 13 07:33:29.324918 kubelet[1419]: E0513 07:33:29.324865 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:30.325604 kubelet[1419]: E0513 07:33:30.325061 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:31.325719 kubelet[1419]: E0513 07:33:31.325604 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:32.327068 kubelet[1419]: E0513 07:33:32.326965 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:33.327434 kubelet[1419]: E0513 07:33:33.327356 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:33.344585 env[1157]: time="2025-05-13T07:33:33.344491869Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:33.348517 env[1157]: time="2025-05-13T07:33:33.348468418Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:33.352136 env[1157]: time="2025-05-13T07:33:33.352092414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:33.354940 env[1157]: time="2025-05-13T07:33:33.354852004Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 07:33:33.362027 env[1157]: time="2025-05-13T07:33:33.361372415Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 07:33:33.370223 env[1157]: time="2025-05-13T07:33:33.368508820Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 07:33:33.389066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740950306.mount: Deactivated successfully. May 13 07:33:33.396879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount806697126.mount: Deactivated successfully. 
May 13 07:33:33.413424 env[1157]: time="2025-05-13T07:33:33.413390301Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\"" May 13 07:33:33.416346 env[1157]: time="2025-05-13T07:33:33.416254634Z" level=info msg="StartContainer for \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\"" May 13 07:33:33.447506 systemd[1]: Started cri-containerd-f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c.scope. May 13 07:33:33.514407 systemd[1]: cri-containerd-f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c.scope: Deactivated successfully. May 13 07:33:33.516301 env[1157]: time="2025-05-13T07:33:33.516257277Z" level=info msg="StartContainer for \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\" returns successfully" May 13 07:33:34.328283 kubelet[1419]: E0513 07:33:34.328229 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:34.388652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c-rootfs.mount: Deactivated successfully. May 13 07:33:34.787700 env[1157]: time="2025-05-13T07:33:34.787606155Z" level=info msg="shim disconnected" id=f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c May 13 07:33:34.788553 env[1157]: time="2025-05-13T07:33:34.788504885Z" level=warning msg="cleaning up after shim disconnected" id=f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c namespace=k8s.io May 13 07:33:34.788723 env[1157]: time="2025-05-13T07:33:34.788688230Z" level=info msg="cleaning up dead shim" May 13 07:33:34.809317 env[1157]: time="2025-05-13T07:33:34.809228510Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:33:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1615 runtime=io.containerd.runc.v2\n" May 13 07:33:35.329790 kubelet[1419]: E0513 07:33:35.329747 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:35.672339 env[1157]: time="2025-05-13T07:33:35.671798698Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 07:33:35.779584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4122267731.mount: Deactivated successfully. May 13 07:33:35.784595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount509023224.mount: Deactivated successfully. May 13 07:33:35.807093 env[1157]: time="2025-05-13T07:33:35.806921625Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\"" May 13 07:33:35.808838 env[1157]: time="2025-05-13T07:33:35.808764815Z" level=info msg="StartContainer for \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\"" May 13 07:33:35.833871 systemd[1]: Started cri-containerd-1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4.scope. 
May 13 07:33:35.881871 env[1157]: time="2025-05-13T07:33:35.881817051Z" level=info msg="StartContainer for \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\" returns successfully" May 13 07:33:35.886024 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 07:33:35.886272 systemd[1]: Stopped systemd-sysctl.service. May 13 07:33:35.886449 systemd[1]: Stopping systemd-sysctl.service... May 13 07:33:35.889318 systemd[1]: Starting systemd-sysctl.service... May 13 07:33:35.892665 systemd[1]: cri-containerd-1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4.scope: Deactivated successfully. May 13 07:33:35.907510 systemd[1]: Finished systemd-sysctl.service. May 13 07:33:36.027637 env[1157]: time="2025-05-13T07:33:36.027534172Z" level=info msg="shim disconnected" id=1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4 May 13 07:33:36.028227 env[1157]: time="2025-05-13T07:33:36.028154228Z" level=warning msg="cleaning up after shim disconnected" id=1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4 namespace=k8s.io May 13 07:33:36.028435 env[1157]: time="2025-05-13T07:33:36.028395933Z" level=info msg="cleaning up dead shim" May 13 07:33:36.056607 env[1157]: time="2025-05-13T07:33:36.056519243Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:33:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1679 runtime=io.containerd.runc.v2\n" May 13 07:33:36.330946 kubelet[1419]: E0513 07:33:36.330239 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:36.522402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524677145.mount: Deactivated successfully. May 13 07:33:36.679778 env[1157]: time="2025-05-13T07:33:36.679710161Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 07:33:36.745099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789355672.mount: Deactivated successfully. May 13 07:33:36.750246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968243440.mount: Deactivated successfully. May 13 07:33:36.760321 env[1157]: time="2025-05-13T07:33:36.760274794Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\"" May 13 07:33:36.761184 env[1157]: time="2025-05-13T07:33:36.761151174Z" level=info msg="StartContainer for \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\"" May 13 07:33:36.795488 systemd[1]: Started cri-containerd-10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3.scope. May 13 07:33:36.842657 systemd[1]: cri-containerd-10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3.scope: Deactivated successfully. 
May 13 07:33:36.847605 env[1157]: time="2025-05-13T07:33:36.847542649Z" level=info msg="StartContainer for \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\" returns successfully" May 13 07:33:37.102730 env[1157]: time="2025-05-13T07:33:37.101377451Z" level=info msg="shim disconnected" id=10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3 May 13 07:33:37.102730 env[1157]: time="2025-05-13T07:33:37.101474927Z" level=warning msg="cleaning up after shim disconnected" id=10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3 namespace=k8s.io May 13 07:33:37.102730 env[1157]: time="2025-05-13T07:33:37.101500618Z" level=info msg="cleaning up dead shim" May 13 07:33:37.134260 env[1157]: time="2025-05-13T07:33:37.134204890Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:33:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1740 runtime=io.containerd.runc.v2\n" May 13 07:33:37.297722 kubelet[1419]: E0513 07:33:37.297615 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:37.332422 kubelet[1419]: E0513 07:33:37.332359 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:37.435129 env[1157]: time="2025-05-13T07:33:37.434911392Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:37.437539 env[1157]: time="2025-05-13T07:33:37.437449195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:37.440119 env[1157]: time="2025-05-13T07:33:37.440061282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:37.443446 env[1157]: time="2025-05-13T07:33:37.443389537Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:37.443887 env[1157]: time="2025-05-13T07:33:37.443836155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 13 07:33:37.448171 env[1157]: time="2025-05-13T07:33:37.448120210Z" level=info msg="CreateContainer within sandbox \"66629885b95359ac3141821f5456f3c4f5568b3c0c01d82c561424a7498a0346\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 07:33:37.481369 env[1157]: time="2025-05-13T07:33:37.481178303Z" level=info msg="CreateContainer within sandbox \"66629885b95359ac3141821f5456f3c4f5568b3c0c01d82c561424a7498a0346\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"93e6dec7bd47e387ceebe53726aaf517c6ba10b4febfc2f380dbf871e8dbe8c8\"" May 13 07:33:37.481976 env[1157]: time="2025-05-13T07:33:37.481825980Z" level=info msg="StartContainer for \"93e6dec7bd47e387ceebe53726aaf517c6ba10b4febfc2f380dbf871e8dbe8c8\"" May 13 07:33:37.515952 systemd[1]: Started cri-containerd-93e6dec7bd47e387ceebe53726aaf517c6ba10b4febfc2f380dbf871e8dbe8c8.scope. 
May 13 07:33:37.581859 env[1157]: time="2025-05-13T07:33:37.581805601Z" level=info msg="StartContainer for \"93e6dec7bd47e387ceebe53726aaf517c6ba10b4febfc2f380dbf871e8dbe8c8\" returns successfully" May 13 07:33:37.683845 env[1157]: time="2025-05-13T07:33:37.683800160Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 07:33:37.708531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318194922.mount: Deactivated successfully. May 13 07:33:37.720354 env[1157]: time="2025-05-13T07:33:37.717972207Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\"" May 13 07:33:37.725077 env[1157]: time="2025-05-13T07:33:37.725036913Z" level=info msg="StartContainer for \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\"" May 13 07:33:37.754836 systemd[1]: Started cri-containerd-37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807.scope. May 13 07:33:37.788796 kubelet[1419]: I0513 07:33:37.788560 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2csn" podStartSLOduration=4.079327577 podStartE2EDuration="20.788540453s" podCreationTimestamp="2025-05-13 07:33:17 +0000 UTC" firstStartedPulling="2025-05-13 07:33:20.737313246 +0000 UTC m=+4.170047231" lastFinishedPulling="2025-05-13 07:33:37.446526112 +0000 UTC m=+20.879260107" observedRunningTime="2025-05-13 07:33:37.72070174 +0000 UTC m=+21.153435755" watchObservedRunningTime="2025-05-13 07:33:37.788540453 +0000 UTC m=+21.221274448" May 13 07:33:37.793153 systemd[1]: cri-containerd-37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807.scope: Deactivated successfully. May 13 07:33:37.797702 env[1157]: time="2025-05-13T07:33:37.797657437Z" level=info msg="StartContainer for \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\" returns successfully" May 13 07:33:37.938951 env[1157]: time="2025-05-13T07:33:37.938848812Z" level=info msg="shim disconnected" id=37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807 May 13 07:33:37.939881 env[1157]: time="2025-05-13T07:33:37.939831343Z" level=warning msg="cleaning up after shim disconnected" id=37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807 namespace=k8s.io May 13 07:33:37.940187 env[1157]: time="2025-05-13T07:33:37.940151058Z" level=info msg="cleaning up dead shim" May 13 07:33:37.961316 env[1157]: time="2025-05-13T07:33:37.959799217Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:33:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1896 runtime=io.containerd.runc.v2\n" May 13 07:33:38.333500 kubelet[1419]: E0513 07:33:38.332880 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:38.524564 systemd[1]: run-containerd-runc-k8s.io-37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807-runc.pdqwof.mount: Deactivated successfully. May 13 07:33:38.524792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807-rootfs.mount: Deactivated successfully. 
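The pod_startup_latency_tracker entry above reports podStartE2EDuration="20.788540453s" for kube-proxy-t2csn; that figure is simply the gap between the podCreationTimestamp and the watchObservedRunningTime printed in the same entry (both are Go time.Time values in their default String() form). A minimal, self-contained Go sketch reproducing that arithmetic follows; it is not part of the log, and the layout string and variable names are illustrative only.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Default Go time.Time String() layout, as used in the kubelet log entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// Timestamps copied verbatim from the kube-proxy-t2csn entry above.
	created, err := time.Parse(layout, "2025-05-13 07:33:17 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observedRunning, err := time.Parse(layout, "2025-05-13 07:33:37.788540453 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints "20.788540453s", matching podStartE2EDuration in the log entry.
	fmt.Println(observedRunning.Sub(created))
}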
May 13 07:33:38.694094 env[1157]: time="2025-05-13T07:33:38.693961246Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 07:33:38.725453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2467232652.mount: Deactivated successfully. May 13 07:33:38.744062 env[1157]: time="2025-05-13T07:33:38.743770533Z" level=info msg="CreateContainer within sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\"" May 13 07:33:38.746075 env[1157]: time="2025-05-13T07:33:38.745005174Z" level=info msg="StartContainer for \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\"" May 13 07:33:38.772874 systemd[1]: Started cri-containerd-688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb.scope. May 13 07:33:38.843719 env[1157]: time="2025-05-13T07:33:38.843653282Z" level=info msg="StartContainer for \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\" returns successfully" May 13 07:33:38.991780 kubelet[1419]: I0513 07:33:38.991623 1419 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 07:33:39.281032 kernel: Initializing XFRM netlink socket May 13 07:33:39.335382 kubelet[1419]: E0513 07:33:39.335210 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:39.747162 kubelet[1419]: I0513 07:33:39.747033 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cd5jz" podStartSLOduration=10.114364824 podStartE2EDuration="22.746951798s" podCreationTimestamp="2025-05-13 07:33:17 +0000 UTC" firstStartedPulling="2025-05-13 07:33:20.726330051 +0000 UTC m=+4.159064046" lastFinishedPulling="2025-05-13 07:33:33.358917035 +0000 UTC m=+16.791651020" observedRunningTime="2025-05-13 07:33:39.744064048 +0000 UTC m=+23.176798083" watchObservedRunningTime="2025-05-13 07:33:39.746951798 +0000 UTC m=+23.179685823" May 13 07:33:40.336513 kubelet[1419]: E0513 07:33:40.336427 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:41.033466 systemd-networkd[992]: cilium_host: Link UP May 13 07:33:41.039104 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 13 07:33:41.044576 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 13 07:33:41.043504 systemd-networkd[992]: cilium_net: Link UP May 13 07:33:41.045955 systemd-networkd[992]: cilium_net: Gained carrier May 13 07:33:41.046368 systemd-networkd[992]: cilium_host: Gained carrier May 13 07:33:41.170382 systemd-networkd[992]: cilium_vxlan: Link UP May 13 07:33:41.170392 systemd-networkd[992]: cilium_vxlan: Gained carrier May 13 07:33:41.337666 kubelet[1419]: E0513 07:33:41.337461 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:41.446060 kernel: NET: Registered PF_ALG protocol family May 13 07:33:41.762144 systemd-networkd[992]: cilium_host: Gained IPv6LL May 13 07:33:41.954311 systemd-networkd[992]: cilium_net: Gained IPv6LL May 13 07:33:42.293685 systemd-networkd[992]: lxc_health: Link UP May 13 07:33:42.301217 systemd-networkd[992]: lxc_health: Gained carrier May 13 
07:33:42.302013 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 07:33:42.339183 kubelet[1419]: E0513 07:33:42.339127 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:43.269568 systemd-networkd[992]: cilium_vxlan: Gained IPv6LL May 13 07:33:43.339947 kubelet[1419]: E0513 07:33:43.339891 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:43.683430 systemd[1]: Created slice kubepods-besteffort-pod776d1e36_8ac7_4ba1_9aa2_aac9f008656e.slice. May 13 07:33:43.766547 kubelet[1419]: I0513 07:33:43.766486 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jfk\" (UniqueName: \"kubernetes.io/projected/776d1e36-8ac7-4ba1-9aa2-aac9f008656e-kube-api-access-p9jfk\") pod \"nginx-deployment-7fcdb87857-h9gql\" (UID: \"776d1e36-8ac7-4ba1-9aa2-aac9f008656e\") " pod="default/nginx-deployment-7fcdb87857-h9gql" May 13 07:33:43.810975 systemd-networkd[992]: lxc_health: Gained IPv6LL May 13 07:33:43.994612 env[1157]: time="2025-05-13T07:33:43.993643616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-h9gql,Uid:776d1e36-8ac7-4ba1-9aa2-aac9f008656e,Namespace:default,Attempt:0,}" May 13 07:33:44.061613 systemd-networkd[992]: lxc507f3ccf4041: Link UP May 13 07:33:44.072172 kernel: eth0: renamed from tmpb2aa9 May 13 07:33:44.096505 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 07:33:44.096696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc507f3ccf4041: link becomes ready May 13 07:33:44.097268 systemd-networkd[992]: lxc507f3ccf4041: Gained carrier May 13 07:33:44.342139 kubelet[1419]: E0513 07:33:44.341924 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:45.342219 kubelet[1419]: E0513 07:33:45.342113 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:45.560777 systemd-networkd[992]: lxc507f3ccf4041: Gained IPv6LL May 13 07:33:46.342374 kubelet[1419]: E0513 07:33:46.342295 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:47.345461 kubelet[1419]: E0513 07:33:47.343120 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:47.739759 env[1157]: time="2025-05-13T07:33:47.739683363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:33:47.740202 env[1157]: time="2025-05-13T07:33:47.739751853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:33:47.740202 env[1157]: time="2025-05-13T07:33:47.739771581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:33:47.740202 env[1157]: time="2025-05-13T07:33:47.740046263Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2aa961cb54a51b78200fc66cfdc2042f11a82f0240594ef2c3ab510ea09b189 pid=2482 runtime=io.containerd.runc.v2 May 13 07:33:47.758257 systemd[1]: run-containerd-runc-k8s.io-b2aa961cb54a51b78200fc66cfdc2042f11a82f0240594ef2c3ab510ea09b189-runc.IMsSdx.mount: Deactivated successfully. May 13 07:33:47.762082 systemd[1]: Started cri-containerd-b2aa961cb54a51b78200fc66cfdc2042f11a82f0240594ef2c3ab510ea09b189.scope. May 13 07:33:47.806113 env[1157]: time="2025-05-13T07:33:47.806068417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-h9gql,Uid:776d1e36-8ac7-4ba1-9aa2-aac9f008656e,Namespace:default,Attempt:0,} returns sandbox id \"b2aa961cb54a51b78200fc66cfdc2042f11a82f0240594ef2c3ab510ea09b189\"" May 13 07:33:47.808424 env[1157]: time="2025-05-13T07:33:47.808387707Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 07:33:48.343483 kubelet[1419]: E0513 07:33:48.343425 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:49.344623 kubelet[1419]: E0513 07:33:49.344511 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:50.346740 kubelet[1419]: E0513 07:33:50.346649 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:51.347003 kubelet[1419]: E0513 07:33:51.346897 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:52.003500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292095808.mount: Deactivated successfully. 
May 13 07:33:52.348607 kubelet[1419]: E0513 07:33:52.347947 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:53.348611 kubelet[1419]: E0513 07:33:53.348520 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:54.214286 env[1157]: time="2025-05-13T07:33:54.214202737Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:54.218625 env[1157]: time="2025-05-13T07:33:54.218569378Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:54.223238 env[1157]: time="2025-05-13T07:33:54.223152739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:54.227656 env[1157]: time="2025-05-13T07:33:54.227585544Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:33:54.230167 env[1157]: time="2025-05-13T07:33:54.230033826Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 13 07:33:54.236348 env[1157]: time="2025-05-13T07:33:54.236260476Z" level=info msg="CreateContainer within sandbox \"b2aa961cb54a51b78200fc66cfdc2042f11a82f0240594ef2c3ab510ea09b189\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 07:33:54.263957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847421156.mount: Deactivated successfully. May 13 07:33:54.279307 env[1157]: time="2025-05-13T07:33:54.279169390Z" level=info msg="CreateContainer within sandbox \"b2aa961cb54a51b78200fc66cfdc2042f11a82f0240594ef2c3ab510ea09b189\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ce4b5ff0be333ef423749b90bf9866ccd27990344c4fefc59252e7866254fb71\"" May 13 07:33:54.281036 env[1157]: time="2025-05-13T07:33:54.280911446Z" level=info msg="StartContainer for \"ce4b5ff0be333ef423749b90bf9866ccd27990344c4fefc59252e7866254fb71\"" May 13 07:33:54.338507 systemd[1]: Started cri-containerd-ce4b5ff0be333ef423749b90bf9866ccd27990344c4fefc59252e7866254fb71.scope. 
May 13 07:33:54.349611 kubelet[1419]: E0513 07:33:54.349567 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:54.377230 env[1157]: time="2025-05-13T07:33:54.377189654Z" level=info msg="StartContainer for \"ce4b5ff0be333ef423749b90bf9866ccd27990344c4fefc59252e7866254fb71\" returns successfully" May 13 07:33:54.799776 kubelet[1419]: I0513 07:33:54.799657 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-h9gql" podStartSLOduration=5.373397757 podStartE2EDuration="11.799594477s" podCreationTimestamp="2025-05-13 07:33:43 +0000 UTC" firstStartedPulling="2025-05-13 07:33:47.807244673 +0000 UTC m=+31.239978658" lastFinishedPulling="2025-05-13 07:33:54.233441343 +0000 UTC m=+37.666175378" observedRunningTime="2025-05-13 07:33:54.798550823 +0000 UTC m=+38.231284868" watchObservedRunningTime="2025-05-13 07:33:54.799594477 +0000 UTC m=+38.232328512" May 13 07:33:55.349909 kubelet[1419]: E0513 07:33:55.349746 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:56.350938 kubelet[1419]: E0513 07:33:56.350864 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:57.297339 kubelet[1419]: E0513 07:33:57.297261 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:57.352172 kubelet[1419]: E0513 07:33:57.352133 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:58.354069 kubelet[1419]: E0513 07:33:58.353946 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:33:59.356239 kubelet[1419]: E0513 07:33:59.356147 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:00.356907 kubelet[1419]: E0513 07:34:00.356786 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:01.358824 kubelet[1419]: E0513 07:34:01.358732 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:02.359975 kubelet[1419]: E0513 07:34:02.359892 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:03.360436 kubelet[1419]: E0513 07:34:03.360256 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:04.361555 kubelet[1419]: E0513 07:34:04.361443 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:05.362444 kubelet[1419]: E0513 07:34:05.362341 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:05.364251 systemd[1]: Created slice kubepods-besteffort-podd0ffbda4_c254_4914_ac36_cff528ea79c9.slice. 
May 13 07:34:05.462336 kubelet[1419]: I0513 07:34:05.462172 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d0ffbda4-c254-4914-ac36-cff528ea79c9-data\") pod \"nfs-server-provisioner-0\" (UID: \"d0ffbda4-c254-4914-ac36-cff528ea79c9\") " pod="default/nfs-server-provisioner-0" May 13 07:34:05.463041 kubelet[1419]: I0513 07:34:05.462903 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5kx7\" (UniqueName: \"kubernetes.io/projected/d0ffbda4-c254-4914-ac36-cff528ea79c9-kube-api-access-d5kx7\") pod \"nfs-server-provisioner-0\" (UID: \"d0ffbda4-c254-4914-ac36-cff528ea79c9\") " pod="default/nfs-server-provisioner-0" May 13 07:34:05.670286 env[1157]: time="2025-05-13T07:34:05.669768127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d0ffbda4-c254-4914-ac36-cff528ea79c9,Namespace:default,Attempt:0,}" May 13 07:34:05.794364 systemd-networkd[992]: lxc1634131a03e7: Link UP May 13 07:34:05.809701 kernel: eth0: renamed from tmpcc180 May 13 07:34:05.832788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 07:34:05.836369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1634131a03e7: link becomes ready May 13 07:34:05.834550 systemd-networkd[992]: lxc1634131a03e7: Gained carrier May 13 07:34:06.149655 env[1157]: time="2025-05-13T07:34:06.149537354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:34:06.149930 env[1157]: time="2025-05-13T07:34:06.149644156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:34:06.149930 env[1157]: time="2025-05-13T07:34:06.149677709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:34:06.150180 env[1157]: time="2025-05-13T07:34:06.150116905Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc180db9877db700738f543bd3be3bd75937253facea5ffc32a6227231be16df pid=2607 runtime=io.containerd.runc.v2 May 13 07:34:06.182079 systemd[1]: Started cri-containerd-cc180db9877db700738f543bd3be3bd75937253facea5ffc32a6227231be16df.scope. May 13 07:34:06.232383 env[1157]: time="2025-05-13T07:34:06.232310688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d0ffbda4-c254-4914-ac36-cff528ea79c9,Namespace:default,Attempt:0,} returns sandbox id \"cc180db9877db700738f543bd3be3bd75937253facea5ffc32a6227231be16df\"" May 13 07:34:06.234943 env[1157]: time="2025-05-13T07:34:06.234673085Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 07:34:06.363398 kubelet[1419]: E0513 07:34:06.363259 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:06.601046 systemd[1]: run-containerd-runc-k8s.io-cc180db9877db700738f543bd3be3bd75937253facea5ffc32a6227231be16df-runc.GbsaLX.mount: Deactivated successfully. 
May 13 07:34:07.364275 kubelet[1419]: E0513 07:34:07.364186 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:07.698810 systemd-networkd[992]: lxc1634131a03e7: Gained IPv6LL May 13 07:34:08.365343 kubelet[1419]: E0513 07:34:08.365238 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:09.366949 kubelet[1419]: E0513 07:34:09.366304 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:10.368703 kubelet[1419]: E0513 07:34:10.368029 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:10.586339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892980612.mount: Deactivated successfully. May 13 07:34:11.369366 kubelet[1419]: E0513 07:34:11.369133 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:12.369763 kubelet[1419]: E0513 07:34:12.369605 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:13.370070 kubelet[1419]: E0513 07:34:13.369946 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:13.828473 env[1157]: time="2025-05-13T07:34:13.828191176Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:13.833180 env[1157]: time="2025-05-13T07:34:13.833116538Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:13.836725 env[1157]: time="2025-05-13T07:34:13.836665200Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:13.840254 env[1157]: time="2025-05-13T07:34:13.840182294Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:13.842336 env[1157]: time="2025-05-13T07:34:13.842241626Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 13 07:34:13.853480 env[1157]: time="2025-05-13T07:34:13.853400698Z" level=info msg="CreateContainer within sandbox \"cc180db9877db700738f543bd3be3bd75937253facea5ffc32a6227231be16df\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 07:34:13.874661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942565637.mount: Deactivated successfully. 
May 13 07:34:13.889630 env[1157]: time="2025-05-13T07:34:13.889581036Z" level=info msg="CreateContainer within sandbox \"cc180db9877db700738f543bd3be3bd75937253facea5ffc32a6227231be16df\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"48a9067a9d1d8fe0a28e3c8ee094aaeef2265aad3f0317a53e7ab7e842313fc8\"" May 13 07:34:13.890900 env[1157]: time="2025-05-13T07:34:13.890872385Z" level=info msg="StartContainer for \"48a9067a9d1d8fe0a28e3c8ee094aaeef2265aad3f0317a53e7ab7e842313fc8\"" May 13 07:34:13.941303 systemd[1]: Started cri-containerd-48a9067a9d1d8fe0a28e3c8ee094aaeef2265aad3f0317a53e7ab7e842313fc8.scope. May 13 07:34:13.993701 env[1157]: time="2025-05-13T07:34:13.993634326Z" level=info msg="StartContainer for \"48a9067a9d1d8fe0a28e3c8ee094aaeef2265aad3f0317a53e7ab7e842313fc8\" returns successfully" May 13 07:34:14.372483 kubelet[1419]: E0513 07:34:14.372176 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:14.881167 systemd[1]: run-containerd-runc-k8s.io-48a9067a9d1d8fe0a28e3c8ee094aaeef2265aad3f0317a53e7ab7e842313fc8-runc.84I4bD.mount: Deactivated successfully. May 13 07:34:14.971348 kubelet[1419]: I0513 07:34:14.970908 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.359554016 podStartE2EDuration="9.970627217s" podCreationTimestamp="2025-05-13 07:34:05 +0000 UTC" firstStartedPulling="2025-05-13 07:34:06.234237555 +0000 UTC m=+49.666971540" lastFinishedPulling="2025-05-13 07:34:13.845310706 +0000 UTC m=+57.278044741" observedRunningTime="2025-05-13 07:34:14.967667262 +0000 UTC m=+58.400401297" watchObservedRunningTime="2025-05-13 07:34:14.970627217 +0000 UTC m=+58.403361252" May 13 07:34:15.374521 kubelet[1419]: E0513 07:34:15.374356 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:16.375034 kubelet[1419]: E0513 07:34:16.374872 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:17.297104 kubelet[1419]: E0513 07:34:17.297033 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:17.376161 kubelet[1419]: E0513 07:34:17.376100 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:18.377730 kubelet[1419]: E0513 07:34:18.377644 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:19.378165 kubelet[1419]: E0513 07:34:19.377964 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:20.378425 kubelet[1419]: E0513 07:34:20.378307 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:21.378655 kubelet[1419]: E0513 07:34:21.378543 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:22.379313 kubelet[1419]: E0513 07:34:22.379210 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:23.380496 kubelet[1419]: E0513 07:34:23.380337 1419 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:24.004449 systemd[1]: Created slice kubepods-besteffort-pod01a83781_4fba_4443_b73b_edbfd541be20.slice. May 13 07:34:24.136059 kubelet[1419]: I0513 07:34:24.135962 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqckz\" (UniqueName: \"kubernetes.io/projected/01a83781-4fba-4443-b73b-edbfd541be20-kube-api-access-jqckz\") pod \"test-pod-1\" (UID: \"01a83781-4fba-4443-b73b-edbfd541be20\") " pod="default/test-pod-1" May 13 07:34:24.136489 kubelet[1419]: I0513 07:34:24.136441 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7568ce42-b09d-4972-86ce-1bb814b7bfd0\" (UniqueName: \"kubernetes.io/nfs/01a83781-4fba-4443-b73b-edbfd541be20-pvc-7568ce42-b09d-4972-86ce-1bb814b7bfd0\") pod \"test-pod-1\" (UID: \"01a83781-4fba-4443-b73b-edbfd541be20\") " pod="default/test-pod-1" May 13 07:34:24.331046 kernel: FS-Cache: Loaded May 13 07:34:24.381279 kubelet[1419]: E0513 07:34:24.381216 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:24.391050 kernel: RPC: Registered named UNIX socket transport module. May 13 07:34:24.391269 kernel: RPC: Registered udp transport module. May 13 07:34:24.391408 kernel: RPC: Registered tcp transport module. May 13 07:34:24.391926 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 13 07:34:24.472060 kernel: FS-Cache: Netfs 'nfs' registered for caching May 13 07:34:24.698562 kernel: NFS: Registering the id_resolver key type May 13 07:34:24.699789 kernel: Key type id_resolver registered May 13 07:34:24.701055 kernel: Key type id_legacy registered May 13 07:34:24.773729 nfsidmap[2729]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 13 07:34:24.785853 nfsidmap[2730]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' May 13 07:34:24.915075 env[1157]: time="2025-05-13T07:34:24.914910331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:01a83781-4fba-4443-b73b-edbfd541be20,Namespace:default,Attempt:0,}" May 13 07:34:25.033409 systemd-networkd[992]: lxca6e95df451da: Link UP May 13 07:34:25.051151 kernel: eth0: renamed from tmp1dfaa May 13 07:34:25.065010 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 13 07:34:25.065236 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca6e95df451da: link becomes ready May 13 07:34:25.065757 systemd-networkd[992]: lxca6e95df451da: Gained carrier May 13 07:34:25.382779 kubelet[1419]: E0513 07:34:25.382526 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:25.438078 env[1157]: time="2025-05-13T07:34:25.437267742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:34:25.438078 env[1157]: time="2025-05-13T07:34:25.437437787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:34:25.438078 env[1157]: time="2025-05-13T07:34:25.437475819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:34:25.438745 env[1157]: time="2025-05-13T07:34:25.438361219Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dfaa8d23b013d834b58a592a1891e33f7804bccf628354a465edb8a59418322 pid=2758 runtime=io.containerd.runc.v2 May 13 07:34:25.489994 systemd[1]: run-containerd-runc-k8s.io-1dfaa8d23b013d834b58a592a1891e33f7804bccf628354a465edb8a59418322-runc.HHzSxo.mount: Deactivated successfully. May 13 07:34:25.509457 systemd[1]: Started cri-containerd-1dfaa8d23b013d834b58a592a1891e33f7804bccf628354a465edb8a59418322.scope. May 13 07:34:25.551242 env[1157]: time="2025-05-13T07:34:25.551150388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:01a83781-4fba-4443-b73b-edbfd541be20,Namespace:default,Attempt:0,} returns sandbox id \"1dfaa8d23b013d834b58a592a1891e33f7804bccf628354a465edb8a59418322\"" May 13 07:34:25.553551 env[1157]: time="2025-05-13T07:34:25.553523279Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 07:34:26.051773 env[1157]: time="2025-05-13T07:34:26.051645007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:26.056777 env[1157]: time="2025-05-13T07:34:26.056692210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:26.061876 env[1157]: time="2025-05-13T07:34:26.061815308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:26.067303 env[1157]: time="2025-05-13T07:34:26.067238137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:26.069714 env[1157]: time="2025-05-13T07:34:26.069607790Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 13 07:34:26.079454 env[1157]: time="2025-05-13T07:34:26.079356867Z" level=info msg="CreateContainer within sandbox \"1dfaa8d23b013d834b58a592a1891e33f7804bccf628354a465edb8a59418322\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 13 07:34:26.118907 env[1157]: time="2025-05-13T07:34:26.118804584Z" level=info msg="CreateContainer within sandbox \"1dfaa8d23b013d834b58a592a1891e33f7804bccf628354a465edb8a59418322\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8e64b90965e89b23dd1faf9c7f8839d5b00629b55a4b4ee6a8c682aeaee7b28d\"" May 13 07:34:26.121252 env[1157]: time="2025-05-13T07:34:26.121106028Z" level=info msg="StartContainer for \"8e64b90965e89b23dd1faf9c7f8839d5b00629b55a4b4ee6a8c682aeaee7b28d\"" May 13 07:34:26.161808 systemd[1]: Started cri-containerd-8e64b90965e89b23dd1faf9c7f8839d5b00629b55a4b4ee6a8c682aeaee7b28d.scope. 
May 13 07:34:26.222968 env[1157]: time="2025-05-13T07:34:26.222915126Z" level=info msg="StartContainer for \"8e64b90965e89b23dd1faf9c7f8839d5b00629b55a4b4ee6a8c682aeaee7b28d\" returns successfully" May 13 07:34:26.245195 systemd-networkd[992]: lxca6e95df451da: Gained IPv6LL May 13 07:34:26.386198 kubelet[1419]: E0513 07:34:26.385935 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:26.449130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572255418.mount: Deactivated successfully. May 13 07:34:27.015647 kubelet[1419]: I0513 07:34:27.012657 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.491218449 podStartE2EDuration="20.012584741s" podCreationTimestamp="2025-05-13 07:34:07 +0000 UTC" firstStartedPulling="2025-05-13 07:34:25.552449599 +0000 UTC m=+68.985183584" lastFinishedPulling="2025-05-13 07:34:26.073815841 +0000 UTC m=+69.506549876" observedRunningTime="2025-05-13 07:34:27.012211529 +0000 UTC m=+70.444945565" watchObservedRunningTime="2025-05-13 07:34:27.012584741 +0000 UTC m=+70.445318777" May 13 07:34:27.387579 kubelet[1419]: E0513 07:34:27.386684 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:28.389513 kubelet[1419]: E0513 07:34:28.389354 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:29.390402 kubelet[1419]: E0513 07:34:29.390340 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:30.391315 kubelet[1419]: E0513 07:34:30.391249 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:31.392237 kubelet[1419]: E0513 07:34:31.392046 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:32.392489 kubelet[1419]: E0513 07:34:32.392419 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:33.393806 kubelet[1419]: E0513 07:34:33.393731 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:34.394102 kubelet[1419]: E0513 07:34:34.393943 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:34.597590 env[1157]: time="2025-05-13T07:34:34.597404804Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 07:34:34.615572 env[1157]: time="2025-05-13T07:34:34.615435632Z" level=info msg="StopContainer for \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\" with timeout 2 (s)" May 13 07:34:34.616442 env[1157]: time="2025-05-13T07:34:34.616377423Z" level=info msg="Stop container \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\" with signal terminated" May 13 07:34:34.635188 systemd-networkd[992]: lxc_health: Link DOWN May 13 07:34:34.635206 systemd-networkd[992]: lxc_health: Lost carrier May 13 07:34:34.687618 systemd[1]: 
cri-containerd-688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb.scope: Deactivated successfully. May 13 07:34:34.688053 systemd[1]: cri-containerd-688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb.scope: Consumed 8.738s CPU time. May 13 07:34:34.708953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb-rootfs.mount: Deactivated successfully. May 13 07:34:35.319871 env[1157]: time="2025-05-13T07:34:35.319731738Z" level=info msg="shim disconnected" id=688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb May 13 07:34:35.319871 env[1157]: time="2025-05-13T07:34:35.319828712Z" level=warning msg="cleaning up after shim disconnected" id=688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb namespace=k8s.io May 13 07:34:35.319871 env[1157]: time="2025-05-13T07:34:35.319853049Z" level=info msg="cleaning up dead shim" May 13 07:34:35.342327 env[1157]: time="2025-05-13T07:34:35.342237362Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2886 runtime=io.containerd.runc.v2\n" May 13 07:34:35.348821 env[1157]: time="2025-05-13T07:34:35.348746670Z" level=info msg="StopContainer for \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\" returns successfully" May 13 07:34:35.351095 env[1157]: time="2025-05-13T07:34:35.350975528Z" level=info msg="StopPodSandbox for \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\"" May 13 07:34:35.351283 env[1157]: time="2025-05-13T07:34:35.351188494Z" level=info msg="Container to stop \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:35.351283 env[1157]: time="2025-05-13T07:34:35.351241925Z" level=info msg="Container to stop \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:35.351487 env[1157]: time="2025-05-13T07:34:35.351289315Z" level=info msg="Container to stop \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:35.351487 env[1157]: time="2025-05-13T07:34:35.351322699Z" level=info msg="Container to stop \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:35.351487 env[1157]: time="2025-05-13T07:34:35.351354619Z" level=info msg="Container to stop \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:35.355848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5-shm.mount: Deactivated successfully. May 13 07:34:35.372553 systemd[1]: cri-containerd-68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5.scope: Deactivated successfully. May 13 07:34:35.395484 kubelet[1419]: E0513 07:34:35.395282 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:35.439511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5-rootfs.mount: Deactivated successfully. 
May 13 07:34:35.447698 env[1157]: time="2025-05-13T07:34:35.447643935Z" level=info msg="shim disconnected" id=68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5 May 13 07:34:35.447910 env[1157]: time="2025-05-13T07:34:35.447888630Z" level=warning msg="cleaning up after shim disconnected" id=68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5 namespace=k8s.io May 13 07:34:35.448007 env[1157]: time="2025-05-13T07:34:35.447972541Z" level=info msg="cleaning up dead shim" May 13 07:34:35.463560 env[1157]: time="2025-05-13T07:34:35.463471833Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2917 runtime=io.containerd.runc.v2\n" May 13 07:34:35.464323 env[1157]: time="2025-05-13T07:34:35.464246556Z" level=info msg="TearDown network for sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" successfully" May 13 07:34:35.464388 env[1157]: time="2025-05-13T07:34:35.464325486Z" level=info msg="StopPodSandbox for \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" returns successfully" May 13 07:34:35.540724 kubelet[1419]: I0513 07:34:35.540639 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hubble-tls\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541102 kubelet[1419]: I0513 07:34:35.540729 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-run\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541102 kubelet[1419]: I0513 07:34:35.540802 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-kernel\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541102 kubelet[1419]: I0513 07:34:35.540859 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cni-path\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541102 kubelet[1419]: I0513 07:34:35.540897 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-xtables-lock\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541102 kubelet[1419]: I0513 07:34:35.541015 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-cgroup\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541534 kubelet[1419]: I0513 07:34:35.541102 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-config-path\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: 
\"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541534 kubelet[1419]: I0513 07:34:35.541143 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-net\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541534 kubelet[1419]: I0513 07:34:35.541207 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-bpf-maps\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541534 kubelet[1419]: I0513 07:34:35.541247 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-etc-cni-netd\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541534 kubelet[1419]: I0513 07:34:35.541301 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-lib-modules\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.541534 kubelet[1419]: I0513 07:34:35.541354 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hostproc\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.542068 kubelet[1419]: I0513 07:34:35.541437 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j96qz\" (UniqueName: \"kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-kube-api-access-j96qz\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.542068 kubelet[1419]: I0513 07:34:35.541538 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-clustermesh-secrets\") pod \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\" (UID: \"cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d\") " May 13 07:34:35.544479 kubelet[1419]: I0513 07:34:35.544350 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.544912 kubelet[1419]: I0513 07:34:35.544832 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.545327 kubelet[1419]: I0513 07:34:35.545286 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.545657 kubelet[1419]: I0513 07:34:35.545617 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.546074 kubelet[1419]: I0513 07:34:35.545962 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hostproc" (OuterVolumeSpecName: "hostproc") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.548543 kubelet[1419]: I0513 07:34:35.548469 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cni-path" (OuterVolumeSpecName: "cni-path") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.548814 kubelet[1419]: I0513 07:34:35.548774 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.549121 kubelet[1419]: I0513 07:34:35.549078 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.553364 systemd[1]: var-lib-kubelet-pods-cd31ec6e\x2de37e\x2d4887\x2d86fb\x2d4d9dca1f1e9d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 07:34:35.554732 kubelet[1419]: I0513 07:34:35.554655 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.556172 kubelet[1419]: I0513 07:34:35.556117 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 07:34:35.556565 kubelet[1419]: I0513 07:34:35.556522 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:35.564628 systemd[1]: var-lib-kubelet-pods-cd31ec6e\x2de37e\x2d4887\x2d86fb\x2d4d9dca1f1e9d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 07:34:35.568215 kubelet[1419]: I0513 07:34:35.568112 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 07:34:35.573385 kubelet[1419]: I0513 07:34:35.570678 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 07:34:35.581469 systemd[1]: var-lib-kubelet-pods-cd31ec6e\x2de37e\x2d4887\x2d86fb\x2d4d9dca1f1e9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj96qz.mount: Deactivated successfully. May 13 07:34:35.584297 kubelet[1419]: I0513 07:34:35.584230 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-kube-api-access-j96qz" (OuterVolumeSpecName: "kube-api-access-j96qz") pod "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" (UID: "cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d"). InnerVolumeSpecName "kube-api-access-j96qz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 07:34:35.648541 kubelet[1419]: I0513 07:34:35.648441 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-etc-cni-netd\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.648832 kubelet[1419]: I0513 07:34:35.648799 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-lib-modules\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.649144 kubelet[1419]: I0513 07:34:35.649067 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-config-path\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.649363 kubelet[1419]: I0513 07:34:35.649332 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-net\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.649573 kubelet[1419]: I0513 07:34:35.649541 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-bpf-maps\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.649756 kubelet[1419]: I0513 07:34:35.649728 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hostproc\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.649975 kubelet[1419]: I0513 07:34:35.649943 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j96qz\" (UniqueName: \"kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-kube-api-access-j96qz\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.650263 kubelet[1419]: I0513 07:34:35.650230 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-clustermesh-secrets\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.650461 kubelet[1419]: I0513 07:34:35.650430 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-hubble-tls\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.650701 kubelet[1419]: I0513 07:34:35.650669 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-run\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.650907 kubelet[1419]: I0513 07:34:35.650877 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cilium-cgroup\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.651158 kubelet[1419]: I0513 07:34:35.651125 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-host-proc-sys-kernel\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.651393 kubelet[1419]: I0513 07:34:35.651362 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-cni-path\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:35.651679 kubelet[1419]: I0513 07:34:35.651638 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d-xtables-lock\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:36.028825 kubelet[1419]: I0513 07:34:36.028762 1419 scope.go:117] "RemoveContainer" containerID="688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb" May 13 07:34:36.034255 env[1157]: time="2025-05-13T07:34:36.034159824Z" level=info msg="RemoveContainer for \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\"" May 13 07:34:36.041654 env[1157]: time="2025-05-13T07:34:36.041557907Z" level=info msg="RemoveContainer for \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\" returns successfully" May 13 07:34:36.042389 kubelet[1419]: I0513 07:34:36.042330 1419 scope.go:117] "RemoveContainer" containerID="37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807" May 13 07:34:36.046024 systemd[1]: Removed slice kubepods-burstable-podcd31ec6e_e37e_4887_86fb_4d9dca1f1e9d.slice. May 13 07:34:36.046260 systemd[1]: kubepods-burstable-podcd31ec6e_e37e_4887_86fb_4d9dca1f1e9d.slice: Consumed 8.866s CPU time. May 13 07:34:36.052081 env[1157]: time="2025-05-13T07:34:36.051969749Z" level=info msg="RemoveContainer for \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\"" May 13 07:34:36.058457 env[1157]: time="2025-05-13T07:34:36.058373391Z" level=info msg="RemoveContainer for \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\" returns successfully" May 13 07:34:36.058827 kubelet[1419]: I0513 07:34:36.058764 1419 scope.go:117] "RemoveContainer" containerID="10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3" May 13 07:34:36.067542 env[1157]: time="2025-05-13T07:34:36.067436619Z" level=info msg="RemoveContainer for \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\"" May 13 07:34:36.091154 env[1157]: time="2025-05-13T07:34:36.091036228Z" level=info msg="RemoveContainer for \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\" returns successfully" May 13 07:34:36.091678 kubelet[1419]: I0513 07:34:36.091617 1419 scope.go:117] "RemoveContainer" containerID="1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4" May 13 07:34:36.098106 env[1157]: time="2025-05-13T07:34:36.098035974Z" level=info msg="RemoveContainer for \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\"" May 13 07:34:36.106864 env[1157]: time="2025-05-13T07:34:36.106781107Z" level=info msg="RemoveContainer for \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\" returns successfully" May 13 07:34:36.109514 kubelet[1419]: I0513 07:34:36.109436 1419 scope.go:117] "RemoveContainer" containerID="f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c" May 13 07:34:36.112817 env[1157]: time="2025-05-13T07:34:36.112744653Z" level=info msg="RemoveContainer for \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\"" May 13 07:34:36.120414 env[1157]: time="2025-05-13T07:34:36.120355580Z" level=info msg="RemoveContainer for \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\" returns successfully" May 13 07:34:36.120832 kubelet[1419]: I0513 07:34:36.120802 1419 scope.go:117] "RemoveContainer" 
containerID="688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb" May 13 07:34:36.121169 env[1157]: time="2025-05-13T07:34:36.121061322Z" level=error msg="ContainerStatus for \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\": not found" May 13 07:34:36.121679 kubelet[1419]: E0513 07:34:36.121625 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\": not found" containerID="688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb" May 13 07:34:36.122128 kubelet[1419]: I0513 07:34:36.121926 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb"} err="failed to get container status \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\": rpc error: code = NotFound desc = an error occurred when try to find container \"688041c29a1dc9f8d7b07298901f4cbd8bc83f7aa08722119a963dfdfcba3ddb\": not found" May 13 07:34:36.122334 kubelet[1419]: I0513 07:34:36.122308 1419 scope.go:117] "RemoveContainer" containerID="37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807" May 13 07:34:36.122908 env[1157]: time="2025-05-13T07:34:36.122838339Z" level=error msg="ContainerStatus for \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\": not found" May 13 07:34:36.123109 kubelet[1419]: E0513 07:34:36.123026 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\": not found" containerID="37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807" May 13 07:34:36.123109 kubelet[1419]: I0513 07:34:36.123056 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807"} err="failed to get container status \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\": rpc error: code = NotFound desc = an error occurred when try to find container \"37146be34557270869ae360f4068fa56a5b7a0d35b1f1129e5df1f6e6baf1807\": not found" May 13 07:34:36.123109 kubelet[1419]: I0513 07:34:36.123079 1419 scope.go:117] "RemoveContainer" containerID="10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3" May 13 07:34:36.123349 env[1157]: time="2025-05-13T07:34:36.123238289Z" level=error msg="ContainerStatus for \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\": not found" May 13 07:34:36.123439 kubelet[1419]: E0513 07:34:36.123376 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\": not found" 
containerID="10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3" May 13 07:34:36.123439 kubelet[1419]: I0513 07:34:36.123397 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3"} err="failed to get container status \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"10528f53dfa5ea7bdd9539a8ab642aceec1a7607efd25723dbbac6faf407c4f3\": not found" May 13 07:34:36.123439 kubelet[1419]: I0513 07:34:36.123413 1419 scope.go:117] "RemoveContainer" containerID="1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4" May 13 07:34:36.123591 env[1157]: time="2025-05-13T07:34:36.123545343Z" level=error msg="ContainerStatus for \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\": not found" May 13 07:34:36.123694 kubelet[1419]: E0513 07:34:36.123661 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\": not found" containerID="1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4" May 13 07:34:36.123694 kubelet[1419]: I0513 07:34:36.123686 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4"} err="failed to get container status \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fec7ec91eefb97985de879cc0869e862b8d88f4bb81039dba75ed0360dbaef4\": not found" May 13 07:34:36.123871 kubelet[1419]: I0513 07:34:36.123702 1419 scope.go:117] "RemoveContainer" containerID="f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c" May 13 07:34:36.123969 env[1157]: time="2025-05-13T07:34:36.123914244Z" level=error msg="ContainerStatus for \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\": not found" May 13 07:34:36.124095 kubelet[1419]: E0513 07:34:36.124046 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\": not found" containerID="f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c" May 13 07:34:36.124095 kubelet[1419]: I0513 07:34:36.124074 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c"} err="failed to get container status \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f77e39ba63bc020e80b47c5c1e6d708f8123384061b9e41e6b00edf0b876d12c\": not found" May 13 07:34:36.396631 kubelet[1419]: E0513 07:34:36.396373 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 
07:34:37.297302 kubelet[1419]: E0513 07:34:37.297238 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:37.398462 kubelet[1419]: E0513 07:34:37.398353 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:37.462542 kubelet[1419]: E0513 07:34:37.462435 1419 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 07:34:37.506562 kubelet[1419]: I0513 07:34:37.506475 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" path="/var/lib/kubelet/pods/cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d/volumes" May 13 07:34:38.398735 kubelet[1419]: E0513 07:34:38.398669 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:38.935538 kubelet[1419]: I0513 07:34:38.935446 1419 setters.go:602] "Node became not ready" node="172.24.4.185" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T07:34:38Z","lastTransitionTime":"2025-05-13T07:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 07:34:39.112247 kubelet[1419]: I0513 07:34:39.112146 1419 memory_manager.go:355] "RemoveStaleState removing state" podUID="cd31ec6e-e37e-4887-86fb-4d9dca1f1e9d" containerName="cilium-agent" May 13 07:34:39.126835 systemd[1]: Created slice kubepods-besteffort-poda2e083c9_9496_4bae_8a15_47ba52acf890.slice. May 13 07:34:39.130627 kubelet[1419]: W0513 07:34:39.130400 1419 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.185" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.185' and this object May 13 07:34:39.133880 kubelet[1419]: E0513 07:34:39.133772 1419 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:172.24.4.185\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.24.4.185' and this object" logger="UnhandledError" May 13 07:34:39.134260 kubelet[1419]: I0513 07:34:39.131045 1419 status_manager.go:890] "Failed to get status for pod" podUID="a2e083c9-9496-4bae-8a15-47ba52acf890" pod="kube-system/cilium-operator-6c4d7847fc-8c7xx" err="pods \"cilium-operator-6c4d7847fc-8c7xx\" is forbidden: User \"system:node:172.24.4.185\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172.24.4.185' and this object" May 13 07:34:39.162780 systemd[1]: Created slice kubepods-burstable-poddab432ae_82fd_4ff5_a56a_a79ec748d7c2.slice. 
May 13 07:34:39.276707 kubelet[1419]: I0513 07:34:39.276502 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cni-path\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.277244 kubelet[1419]: I0513 07:34:39.277198 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-clustermesh-secrets\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.277530 kubelet[1419]: I0513 07:34:39.277461 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-config-path\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.277773 kubelet[1419]: I0513 07:34:39.277733 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-xtables-lock\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.278030 kubelet[1419]: I0513 07:34:39.277949 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2dx\" (UniqueName: \"kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-kube-api-access-ts2dx\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.278326 kubelet[1419]: I0513 07:34:39.278249 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-run\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.278668 kubelet[1419]: I0513 07:34:39.278626 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hostproc\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.278974 kubelet[1419]: I0513 07:34:39.278930 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-cgroup\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.279307 kubelet[1419]: I0513 07:34:39.279266 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-etc-cni-netd\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.279585 kubelet[1419]: I0513 07:34:39.279543 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-ipsec-secrets\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.279876 kubelet[1419]: I0513 07:34:39.279817 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2e083c9-9496-4bae-8a15-47ba52acf890-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8c7xx\" (UID: \"a2e083c9-9496-4bae-8a15-47ba52acf890\") " pod="kube-system/cilium-operator-6c4d7847fc-8c7xx" May 13 07:34:39.280205 kubelet[1419]: I0513 07:34:39.280161 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6txz6\" (UniqueName: \"kubernetes.io/projected/a2e083c9-9496-4bae-8a15-47ba52acf890-kube-api-access-6txz6\") pod \"cilium-operator-6c4d7847fc-8c7xx\" (UID: \"a2e083c9-9496-4bae-8a15-47ba52acf890\") " pod="kube-system/cilium-operator-6c4d7847fc-8c7xx" May 13 07:34:39.280512 kubelet[1419]: I0513 07:34:39.280471 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-bpf-maps\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.280788 kubelet[1419]: I0513 07:34:39.280717 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-lib-modules\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.281116 kubelet[1419]: I0513 07:34:39.281071 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-net\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.281419 kubelet[1419]: I0513 07:34:39.281379 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hubble-tls\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.281698 kubelet[1419]: I0513 07:34:39.281657 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-kernel\") pod \"cilium-c5lft\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " pod="kube-system/cilium-c5lft" May 13 07:34:39.412216 kubelet[1419]: E0513 07:34:39.412157 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:40.415218 kubelet[1419]: E0513 07:34:40.415152 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:40.635716 env[1157]: time="2025-05-13T07:34:40.635526190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8c7xx,Uid:a2e083c9-9496-4bae-8a15-47ba52acf890,Namespace:kube-system,Attempt:0,}" May 13 07:34:40.677197 env[1157]: 
time="2025-05-13T07:34:40.677087470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5lft,Uid:dab432ae-82fd-4ff5-a56a-a79ec748d7c2,Namespace:kube-system,Attempt:0,}" May 13 07:34:40.704818 env[1157]: time="2025-05-13T07:34:40.704666856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:34:40.705331 env[1157]: time="2025-05-13T07:34:40.705238010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:34:40.705662 env[1157]: time="2025-05-13T07:34:40.705566093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:34:40.706715 env[1157]: time="2025-05-13T07:34:40.706565611Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06a6c33fe924054bf59744bac916ba54259fc400b2f34733aecb6fd026023d3f pid=2951 runtime=io.containerd.runc.v2 May 13 07:34:40.730436 env[1157]: time="2025-05-13T07:34:40.730338060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:34:40.730436 env[1157]: time="2025-05-13T07:34:40.730383446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:34:40.730748 env[1157]: time="2025-05-13T07:34:40.730397262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:34:40.730748 env[1157]: time="2025-05-13T07:34:40.730551836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617 pid=2969 runtime=io.containerd.runc.v2 May 13 07:34:40.751100 systemd[1]: Started cri-containerd-06a6c33fe924054bf59744bac916ba54259fc400b2f34733aecb6fd026023d3f.scope. May 13 07:34:40.776778 systemd[1]: Started cri-containerd-6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617.scope. 
May 13 07:34:40.820469 env[1157]: time="2025-05-13T07:34:40.820423517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5lft,Uid:dab432ae-82fd-4ff5-a56a-a79ec748d7c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\"" May 13 07:34:40.824001 env[1157]: time="2025-05-13T07:34:40.823926858Z" level=info msg="CreateContainer within sandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 07:34:40.842188 env[1157]: time="2025-05-13T07:34:40.842126789Z" level=info msg="CreateContainer within sandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\"" May 13 07:34:40.843106 env[1157]: time="2025-05-13T07:34:40.843057475Z" level=info msg="StartContainer for \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\"" May 13 07:34:40.848870 env[1157]: time="2025-05-13T07:34:40.848827308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8c7xx,Uid:a2e083c9-9496-4bae-8a15-47ba52acf890,Namespace:kube-system,Attempt:0,} returns sandbox id \"06a6c33fe924054bf59744bac916ba54259fc400b2f34733aecb6fd026023d3f\"" May 13 07:34:40.850732 env[1157]: time="2025-05-13T07:34:40.850707007Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 07:34:40.866725 systemd[1]: Started cri-containerd-c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd.scope. May 13 07:34:40.878448 systemd[1]: cri-containerd-c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd.scope: Deactivated successfully. 
May 13 07:34:40.896392 env[1157]: time="2025-05-13T07:34:40.896326661Z" level=info msg="shim disconnected" id=c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd May 13 07:34:40.896392 env[1157]: time="2025-05-13T07:34:40.896380443Z" level=warning msg="cleaning up after shim disconnected" id=c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd namespace=k8s.io May 13 07:34:40.896392 env[1157]: time="2025-05-13T07:34:40.896391885Z" level=info msg="cleaning up dead shim" May 13 07:34:40.903767 env[1157]: time="2025-05-13T07:34:40.903722561Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3052 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T07:34:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 13 07:34:40.904098 env[1157]: time="2025-05-13T07:34:40.903955043Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" May 13 07:34:40.904319 env[1157]: time="2025-05-13T07:34:40.904259681Z" level=error msg="Failed to pipe stdout of container \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\"" error="reading from a closed fifo" May 13 07:34:40.906130 env[1157]: time="2025-05-13T07:34:40.906074927Z" level=error msg="Failed to pipe stderr of container \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\"" error="reading from a closed fifo" May 13 07:34:40.909848 env[1157]: time="2025-05-13T07:34:40.909787485Z" level=error msg="StartContainer for \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 13 07:34:40.910231 kubelet[1419]: E0513 07:34:40.910173 1419 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd" May 13 07:34:40.910771 kubelet[1419]: E0513 07:34:40.910739 1419 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 13 07:34:40.910771 kubelet[1419]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 13 07:34:40.910771 kubelet[1419]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 13 07:34:40.910771 kubelet[1419]: rm /hostbin/cilium-mount May 13 07:34:40.910901 kubelet[1419]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-c5lft_kube-system(dab432ae-82fd-4ff5-a56a-a79ec748d7c2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 13 07:34:40.910901 kubelet[1419]: > logger="UnhandledError" May 13 07:34:40.911914 kubelet[1419]: E0513 07:34:40.911875 1419 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-c5lft" podUID="dab432ae-82fd-4ff5-a56a-a79ec748d7c2" May 13 07:34:41.058101 env[1157]: time="2025-05-13T07:34:41.057770524Z" level=info msg="CreateContainer within sandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 13 07:34:41.087602 env[1157]: time="2025-05-13T07:34:41.087502841Z" level=info msg="CreateContainer within sandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\"" May 13 07:34:41.089562 env[1157]: time="2025-05-13T07:34:41.088774845Z" level=info msg="StartContainer for \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\"" May 13 07:34:41.128432 systemd[1]: Started cri-containerd-6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488.scope. May 13 07:34:41.173159 systemd[1]: cri-containerd-6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488.scope: Deactivated successfully. 
May 13 07:34:41.186769 env[1157]: time="2025-05-13T07:34:41.186663491Z" level=info msg="shim disconnected" id=6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488 May 13 07:34:41.187343 env[1157]: time="2025-05-13T07:34:41.187289317Z" level=warning msg="cleaning up after shim disconnected" id=6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488 namespace=k8s.io May 13 07:34:41.187539 env[1157]: time="2025-05-13T07:34:41.187497493Z" level=info msg="cleaning up dead shim" May 13 07:34:41.207874 env[1157]: time="2025-05-13T07:34:41.207761370Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3089 runtime=io.containerd.runc.v2\ntime=\"2025-05-13T07:34:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 13 07:34:41.208849 env[1157]: time="2025-05-13T07:34:41.208725510Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" May 13 07:34:41.209679 env[1157]: time="2025-05-13T07:34:41.209057810Z" level=error msg="Failed to pipe stdout of container \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\"" error="reading from a closed fifo" May 13 07:34:41.209925 env[1157]: time="2025-05-13T07:34:41.209143683Z" level=error msg="Failed to pipe stderr of container \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\"" error="reading from a closed fifo" May 13 07:34:41.214786 env[1157]: time="2025-05-13T07:34:41.214697614Z" level=error msg="StartContainer for \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 13 07:34:41.215346 kubelet[1419]: E0513 07:34:41.215255 1419 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488" May 13 07:34:41.215614 kubelet[1419]: E0513 07:34:41.215420 1419 kuberuntime_manager.go:1341] "Unhandled Error" err=< May 13 07:34:41.215614 kubelet[1419]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 13 07:34:41.215614 kubelet[1419]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 13 07:34:41.215614 kubelet[1419]: rm /hostbin/cilium-mount May 13 07:34:41.215614 kubelet[1419]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ts2dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-c5lft_kube-system(dab432ae-82fd-4ff5-a56a-a79ec748d7c2): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 13 07:34:41.215614 kubelet[1419]: > logger="UnhandledError" May 13 07:34:41.217277 kubelet[1419]: E0513 07:34:41.217146 1419 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-c5lft" podUID="dab432ae-82fd-4ff5-a56a-a79ec748d7c2" May 13 07:34:41.418088 kubelet[1419]: E0513 07:34:41.416835 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:42.067542 kubelet[1419]: I0513 07:34:42.067480 1419 scope.go:117] "RemoveContainer" containerID="c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd" May 13 07:34:42.070050 env[1157]: time="2025-05-13T07:34:42.069907614Z" level=info msg="StopPodSandbox for \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\"" May 13 07:34:42.071186 env[1157]: time="2025-05-13T07:34:42.071109273Z" level=info msg="Container to stop \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:42.071375 env[1157]: time="2025-05-13T07:34:42.071188123Z" level=info msg="Container to stop \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 07:34:42.075915 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617-shm.mount: Deactivated successfully. May 13 07:34:42.091393 env[1157]: time="2025-05-13T07:34:42.091227163Z" level=info msg="RemoveContainer for \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\"" May 13 07:34:42.101341 env[1157]: time="2025-05-13T07:34:42.101127066Z" level=info msg="RemoveContainer for \"c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd\" returns successfully" May 13 07:34:42.114237 systemd[1]: cri-containerd-6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617.scope: Deactivated successfully. May 13 07:34:42.165255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617-rootfs.mount: Deactivated successfully. May 13 07:34:42.173239 env[1157]: time="2025-05-13T07:34:42.173181242Z" level=info msg="shim disconnected" id=6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617 May 13 07:34:42.173567 env[1157]: time="2025-05-13T07:34:42.173543029Z" level=warning msg="cleaning up after shim disconnected" id=6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617 namespace=k8s.io May 13 07:34:42.173674 env[1157]: time="2025-05-13T07:34:42.173655361Z" level=info msg="cleaning up dead shim" May 13 07:34:42.180800 env[1157]: time="2025-05-13T07:34:42.180773596Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3122 runtime=io.containerd.runc.v2\n" May 13 07:34:42.181197 env[1157]: time="2025-05-13T07:34:42.181170299Z" level=info msg="TearDown network for sandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" successfully" May 13 07:34:42.181310 env[1157]: time="2025-05-13T07:34:42.181289105Z" level=info msg="StopPodSandbox for \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" returns successfully" May 13 07:34:42.303479 kubelet[1419]: I0513 07:34:42.303384 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hubble-tls\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303479 kubelet[1419]: I0513 07:34:42.303467 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-kernel\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303520 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cni-path\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303562 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-bpf-maps\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303632 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-config-path\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303680 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-clustermesh-secrets\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303721 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-run\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303760 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hostproc\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303798 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-cgroup\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303839 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-xtables-lock\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.303919 kubelet[1419]: I0513 07:34:42.303890 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-ipsec-secrets\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.304801 kubelet[1419]: I0513 07:34:42.303929 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-lib-modules\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.304801 kubelet[1419]: I0513 07:34:42.304047 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-net\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.304801 kubelet[1419]: I0513 07:34:42.304128 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts2dx\" (UniqueName: \"kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-kube-api-access-ts2dx\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.304801 kubelet[1419]: I0513 07:34:42.304173 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-etc-cni-netd\") pod \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\" (UID: \"dab432ae-82fd-4ff5-a56a-a79ec748d7c2\") " May 13 07:34:42.304801 kubelet[1419]: I0513 07:34:42.304296 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.307400 kubelet[1419]: I0513 07:34:42.305683 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.307400 kubelet[1419]: I0513 07:34:42.305788 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.307400 kubelet[1419]: I0513 07:34:42.306455 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hostproc" (OuterVolumeSpecName: "hostproc") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.307400 kubelet[1419]: I0513 07:34:42.306573 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.307400 kubelet[1419]: I0513 07:34:42.306681 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cni-path" (OuterVolumeSpecName: "cni-path") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.307400 kubelet[1419]: I0513 07:34:42.306802 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.314286 systemd[1]: var-lib-kubelet-pods-dab432ae\x2d82fd\x2d4ff5\x2da56a\x2da79ec748d7c2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 13 07:34:42.320581 kubelet[1419]: I0513 07:34:42.320249 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.320581 kubelet[1419]: I0513 07:34:42.320504 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.327315 systemd[1]: var-lib-kubelet-pods-dab432ae\x2d82fd\x2d4ff5\x2da56a\x2da79ec748d7c2-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 13 07:34:42.334531 kubelet[1419]: I0513 07:34:42.334461 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 07:34:42.338927 kubelet[1419]: I0513 07:34:42.338836 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 07:34:42.340368 kubelet[1419]: I0513 07:34:42.340290 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 07:34:42.344226 kubelet[1419]: I0513 07:34:42.344169 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 07:34:42.348934 kubelet[1419]: I0513 07:34:42.348825 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 07:34:42.353266 kubelet[1419]: I0513 07:34:42.353200 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-kube-api-access-ts2dx" (OuterVolumeSpecName: "kube-api-access-ts2dx") pod "dab432ae-82fd-4ff5-a56a-a79ec748d7c2" (UID: "dab432ae-82fd-4ff5-a56a-a79ec748d7c2"). InnerVolumeSpecName "kube-api-access-ts2dx". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.404972 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-kernel\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405021 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cni-path\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405034 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-bpf-maps\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405045 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hubble-tls\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405056 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-config-path\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405067 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-clustermesh-secrets\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405077 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-hostproc\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405086 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-cgroup\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405096 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-xtables-lock\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.405074 kubelet[1419]: I0513 07:34:42.405106 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-run\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.406208 kubelet[1419]: I0513 07:34:42.405116 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-lib-modules\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.406208 kubelet[1419]: I0513 07:34:42.405125 
1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-host-proc-sys-net\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.406208 kubelet[1419]: I0513 07:34:42.405135 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ts2dx\" (UniqueName: \"kubernetes.io/projected/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-kube-api-access-ts2dx\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.406208 kubelet[1419]: I0513 07:34:42.405144 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-etc-cni-netd\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.406208 kubelet[1419]: I0513 07:34:42.405156 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dab432ae-82fd-4ff5-a56a-a79ec748d7c2-cilium-ipsec-secrets\") on node \"172.24.4.185\" DevicePath \"\"" May 13 07:34:42.417313 kubelet[1419]: E0513 07:34:42.417242 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:42.465675 kubelet[1419]: E0513 07:34:42.465254 1419 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 07:34:42.670913 systemd[1]: var-lib-kubelet-pods-dab432ae\x2d82fd\x2d4ff5\x2da56a\x2da79ec748d7c2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dts2dx.mount: Deactivated successfully. May 13 07:34:42.672022 systemd[1]: var-lib-kubelet-pods-dab432ae\x2d82fd\x2d4ff5\x2da56a\x2da79ec748d7c2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 07:34:43.077909 kubelet[1419]: I0513 07:34:43.075373 1419 scope.go:117] "RemoveContainer" containerID="6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488" May 13 07:34:43.080062 systemd[1]: Removed slice kubepods-burstable-poddab432ae_82fd_4ff5_a56a_a79ec748d7c2.slice. May 13 07:34:43.082058 env[1157]: time="2025-05-13T07:34:43.081925355Z" level=info msg="RemoveContainer for \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\"" May 13 07:34:43.087921 env[1157]: time="2025-05-13T07:34:43.087879859Z" level=info msg="RemoveContainer for \"6ae23d1dcce78239b24d7c134698142a8fe3e4d5e5e2b8b776531b552b722488\" returns successfully" May 13 07:34:43.161093 kubelet[1419]: I0513 07:34:43.161025 1419 memory_manager.go:355] "RemoveStaleState removing state" podUID="dab432ae-82fd-4ff5-a56a-a79ec748d7c2" containerName="mount-cgroup" May 13 07:34:43.161093 kubelet[1419]: I0513 07:34:43.161081 1419 memory_manager.go:355] "RemoveStaleState removing state" podUID="dab432ae-82fd-4ff5-a56a-a79ec748d7c2" containerName="mount-cgroup" May 13 07:34:43.169135 systemd[1]: Created slice kubepods-burstable-podf51e68a7_15bc_46e1_bf79_b296de2eb2a8.slice. 
May 13 07:34:43.312141 kubelet[1419]: I0513 07:34:43.311881 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-host-proc-sys-net\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.312958 kubelet[1419]: I0513 07:34:43.312914 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-etc-cni-netd\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.313364 kubelet[1419]: I0513 07:34:43.313324 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-lib-modules\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.313757 kubelet[1419]: I0513 07:34:43.313693 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-hubble-tls\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.314112 kubelet[1419]: I0513 07:34:43.314030 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-bpf-maps\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.314438 kubelet[1419]: I0513 07:34:43.314356 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-hostproc\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.314813 kubelet[1419]: I0513 07:34:43.314752 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-cilium-cgroup\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.315173 kubelet[1419]: I0513 07:34:43.315076 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-cilium-ipsec-secrets\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.315563 kubelet[1419]: I0513 07:34:43.315523 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-host-proc-sys-kernel\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.316077 kubelet[1419]: I0513 07:34:43.316035 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-cilium-run\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.316765 kubelet[1419]: I0513 07:34:43.316722 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-cni-path\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.317577 kubelet[1419]: I0513 07:34:43.317376 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-clustermesh-secrets\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.318125 kubelet[1419]: I0513 07:34:43.317934 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-xtables-lock\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.318599 kubelet[1419]: I0513 07:34:43.318473 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-cilium-config-path\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.319104 kubelet[1419]: I0513 07:34:43.318929 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnvqt\" (UniqueName: \"kubernetes.io/projected/f51e68a7-15bc-46e1-bf79-b296de2eb2a8-kube-api-access-lnvqt\") pod \"cilium-ql4lm\" (UID: \"f51e68a7-15bc-46e1-bf79-b296de2eb2a8\") " pod="kube-system/cilium-ql4lm" May 13 07:34:43.418379 kubelet[1419]: E0513 07:34:43.418084 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:43.506269 kubelet[1419]: I0513 07:34:43.506234 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab432ae-82fd-4ff5-a56a-a79ec748d7c2" path="/var/lib/kubelet/pods/dab432ae-82fd-4ff5-a56a-a79ec748d7c2/volumes" May 13 07:34:43.733258 env[1157]: time="2025-05-13T07:34:43.733140191Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:43.737821 env[1157]: time="2025-05-13T07:34:43.737761055Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:43.743051 env[1157]: time="2025-05-13T07:34:43.742952000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 07:34:43.745258 env[1157]: time="2025-05-13T07:34:43.745122829Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 13 07:34:43.751720 env[1157]: time="2025-05-13T07:34:43.751632115Z" level=info msg="CreateContainer within sandbox \"06a6c33fe924054bf59744bac916ba54259fc400b2f34733aecb6fd026023d3f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 07:34:43.785231 env[1157]: time="2025-05-13T07:34:43.785145831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ql4lm,Uid:f51e68a7-15bc-46e1-bf79-b296de2eb2a8,Namespace:kube-system,Attempt:0,}" May 13 07:34:43.792464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344205936.mount: Deactivated successfully. May 13 07:34:43.809010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2201687852.mount: Deactivated successfully. May 13 07:34:43.832963 env[1157]: time="2025-05-13T07:34:43.832794919Z" level=info msg="CreateContainer within sandbox \"06a6c33fe924054bf59744bac916ba54259fc400b2f34733aecb6fd026023d3f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4627c49b3bbc763892c2a75130cd3660a1407ac3ba81a29adab1774a173f0e2b\"" May 13 07:34:43.834805 env[1157]: time="2025-05-13T07:34:43.834490535Z" level=info msg="StartContainer for \"4627c49b3bbc763892c2a75130cd3660a1407ac3ba81a29adab1774a173f0e2b\"" May 13 07:34:43.872331 env[1157]: time="2025-05-13T07:34:43.871547233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 07:34:43.872331 env[1157]: time="2025-05-13T07:34:43.871697337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 07:34:43.872331 env[1157]: time="2025-05-13T07:34:43.871744416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 07:34:43.873787 env[1157]: time="2025-05-13T07:34:43.873498243Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff pid=3160 runtime=io.containerd.runc.v2 May 13 07:34:43.892308 systemd[1]: Started cri-containerd-4627c49b3bbc763892c2a75130cd3660a1407ac3ba81a29adab1774a173f0e2b.scope. May 13 07:34:43.901309 systemd[1]: Started cri-containerd-8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff.scope. 
May 13 07:34:43.936946 env[1157]: time="2025-05-13T07:34:43.936884420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ql4lm,Uid:f51e68a7-15bc-46e1-bf79-b296de2eb2a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\"" May 13 07:34:43.941280 env[1157]: time="2025-05-13T07:34:43.941237527Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 07:34:43.959873 env[1157]: time="2025-05-13T07:34:43.959780659Z" level=info msg="StartContainer for \"4627c49b3bbc763892c2a75130cd3660a1407ac3ba81a29adab1774a173f0e2b\" returns successfully" May 13 07:34:43.970507 env[1157]: time="2025-05-13T07:34:43.970447591Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0dd285849065d80ad0bbe7d9596abccb0858b84df5b406006b5a07bccefee1cf\"" May 13 07:34:43.972803 env[1157]: time="2025-05-13T07:34:43.972768703Z" level=info msg="StartContainer for \"0dd285849065d80ad0bbe7d9596abccb0858b84df5b406006b5a07bccefee1cf\"" May 13 07:34:43.998154 systemd[1]: Started cri-containerd-0dd285849065d80ad0bbe7d9596abccb0858b84df5b406006b5a07bccefee1cf.scope. May 13 07:34:44.004952 kubelet[1419]: W0513 07:34:44.004747 1419 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddab432ae_82fd_4ff5_a56a_a79ec748d7c2.slice/cri-containerd-c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd.scope WatchSource:0}: container "c3447d890417c82320cbb276a514754ea2ecf8addd5132077a5b0906f6e874fd" in namespace "k8s.io": not found May 13 07:34:44.049178 env[1157]: time="2025-05-13T07:34:44.049130838Z" level=info msg="StartContainer for \"0dd285849065d80ad0bbe7d9596abccb0858b84df5b406006b5a07bccefee1cf\" returns successfully" May 13 07:34:44.068500 systemd[1]: cri-containerd-0dd285849065d80ad0bbe7d9596abccb0858b84df5b406006b5a07bccefee1cf.scope: Deactivated successfully. 
May 13 07:34:44.366195 env[1157]: time="2025-05-13T07:34:44.365851861Z" level=info msg="shim disconnected" id=0dd285849065d80ad0bbe7d9596abccb0858b84df5b406006b5a07bccefee1cf May 13 07:34:44.366195 env[1157]: time="2025-05-13T07:34:44.366008919Z" level=warning msg="cleaning up after shim disconnected" id=0dd285849065d80ad0bbe7d9596abccb0858b84df5b406006b5a07bccefee1cf namespace=k8s.io May 13 07:34:44.366195 env[1157]: time="2025-05-13T07:34:44.366040309Z" level=info msg="cleaning up dead shim" May 13 07:34:44.376571 kubelet[1419]: I0513 07:34:44.376326 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8c7xx" podStartSLOduration=2.47915227 podStartE2EDuration="5.376216635s" podCreationTimestamp="2025-05-13 07:34:39 +0000 UTC" firstStartedPulling="2025-05-13 07:34:40.850302289 +0000 UTC m=+84.283036284" lastFinishedPulling="2025-05-13 07:34:43.747366614 +0000 UTC m=+87.180100649" observedRunningTime="2025-05-13 07:34:44.189311342 +0000 UTC m=+87.622045327" watchObservedRunningTime="2025-05-13 07:34:44.376216635 +0000 UTC m=+87.808950670" May 13 07:34:44.387111 env[1157]: time="2025-05-13T07:34:44.386938747Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3270 runtime=io.containerd.runc.v2\n" May 13 07:34:44.419139 kubelet[1419]: E0513 07:34:44.419027 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:45.142668 env[1157]: time="2025-05-13T07:34:45.142581024Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 07:34:45.181919 env[1157]: time="2025-05-13T07:34:45.181755687Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4\"" May 13 07:34:45.183781 env[1157]: time="2025-05-13T07:34:45.183724159Z" level=info msg="StartContainer for \"bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4\"" May 13 07:34:45.236225 systemd[1]: Started cri-containerd-bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4.scope. May 13 07:34:45.267718 env[1157]: time="2025-05-13T07:34:45.267673834Z" level=info msg="StartContainer for \"bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4\" returns successfully" May 13 07:34:45.270581 systemd[1]: cri-containerd-bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4.scope: Deactivated successfully. 
May 13 07:34:45.294165 env[1157]: time="2025-05-13T07:34:45.294116813Z" level=info msg="shim disconnected" id=bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4 May 13 07:34:45.294451 env[1157]: time="2025-05-13T07:34:45.294427181Z" level=warning msg="cleaning up after shim disconnected" id=bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4 namespace=k8s.io May 13 07:34:45.294591 env[1157]: time="2025-05-13T07:34:45.294572066Z" level=info msg="cleaning up dead shim" May 13 07:34:45.301937 env[1157]: time="2025-05-13T07:34:45.301906210Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3335 runtime=io.containerd.runc.v2\n" May 13 07:34:45.419673 kubelet[1419]: E0513 07:34:45.419371 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:45.783047 systemd[1]: run-containerd-runc-k8s.io-bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4-runc.trJK7f.mount: Deactivated successfully. May 13 07:34:45.783310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc4d55ce9647a70b01c4a35fc9546f30720ef84e36c399f187d996774f0d8ac4-rootfs.mount: Deactivated successfully. May 13 07:34:46.152561 env[1157]: time="2025-05-13T07:34:46.151779198Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 07:34:46.190421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757253426.mount: Deactivated successfully. May 13 07:34:46.208380 env[1157]: time="2025-05-13T07:34:46.208292204Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac\"" May 13 07:34:46.210516 env[1157]: time="2025-05-13T07:34:46.210453290Z" level=info msg="StartContainer for \"764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac\"" May 13 07:34:46.259192 systemd[1]: Started cri-containerd-764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac.scope. May 13 07:34:46.296779 systemd[1]: cri-containerd-764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac.scope: Deactivated successfully. 
May 13 07:34:46.298209 env[1157]: time="2025-05-13T07:34:46.298173542Z" level=info msg="StartContainer for \"764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac\" returns successfully" May 13 07:34:46.323255 env[1157]: time="2025-05-13T07:34:46.323206214Z" level=info msg="shim disconnected" id=764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac May 13 07:34:46.323667 env[1157]: time="2025-05-13T07:34:46.323644816Z" level=warning msg="cleaning up after shim disconnected" id=764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac namespace=k8s.io May 13 07:34:46.323754 env[1157]: time="2025-05-13T07:34:46.323736870Z" level=info msg="cleaning up dead shim" May 13 07:34:46.332160 env[1157]: time="2025-05-13T07:34:46.332110520Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3391 runtime=io.containerd.runc.v2\n" May 13 07:34:46.421966 kubelet[1419]: E0513 07:34:46.420339 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:46.783371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-764f20770924ad6edbdb6db522c8179d63fea9d67409fda8f6b0213bd7f038ac-rootfs.mount: Deactivated successfully. May 13 07:34:47.160854 env[1157]: time="2025-05-13T07:34:47.160101569Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 07:34:47.197320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118826477.mount: Deactivated successfully. May 13 07:34:47.218375 env[1157]: time="2025-05-13T07:34:47.218215814Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc\"" May 13 07:34:47.219897 env[1157]: time="2025-05-13T07:34:47.219827557Z" level=info msg="StartContainer for \"dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc\"" May 13 07:34:47.262963 systemd[1]: Started cri-containerd-dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc.scope. May 13 07:34:47.314829 systemd[1]: cri-containerd-dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc.scope: Deactivated successfully. 
May 13 07:34:47.318191 env[1157]: time="2025-05-13T07:34:47.318107856Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51e68a7_15bc_46e1_bf79_b296de2eb2a8.slice/cri-containerd-dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc.scope/memory.events\": no such file or directory" May 13 07:34:47.321727 env[1157]: time="2025-05-13T07:34:47.321668502Z" level=info msg="StartContainer for \"dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc\" returns successfully" May 13 07:34:47.346854 env[1157]: time="2025-05-13T07:34:47.346799500Z" level=info msg="shim disconnected" id=dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc May 13 07:34:47.347147 env[1157]: time="2025-05-13T07:34:47.347122542Z" level=warning msg="cleaning up after shim disconnected" id=dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc namespace=k8s.io May 13 07:34:47.347255 env[1157]: time="2025-05-13T07:34:47.347237430Z" level=info msg="cleaning up dead shim" May 13 07:34:47.354747 env[1157]: time="2025-05-13T07:34:47.354699178Z" level=warning msg="cleanup warnings time=\"2025-05-13T07:34:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3450 runtime=io.containerd.runc.v2\n" May 13 07:34:47.421821 kubelet[1419]: E0513 07:34:47.421565 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:47.477401 kubelet[1419]: E0513 07:34:47.477244 1419 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 07:34:47.783583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc985b0e35d8f6cc177412e7896baf1883d1c296bf5cc259753308364bd5bdbc-rootfs.mount: Deactivated successfully. May 13 07:34:48.175120 env[1157]: time="2025-05-13T07:34:48.174757884Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 07:34:48.217268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275754908.mount: Deactivated successfully. May 13 07:34:48.232355 env[1157]: time="2025-05-13T07:34:48.232176752Z" level=info msg="CreateContainer within sandbox \"8a53e272db51dba8ad43c49de79e268e463eea9aeda39e431c429b00847536ff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a\"" May 13 07:34:48.234112 env[1157]: time="2025-05-13T07:34:48.233599657Z" level=info msg="StartContainer for \"6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a\"" May 13 07:34:48.270097 systemd[1]: Started cri-containerd-6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a.scope. 
May 13 07:34:48.307799 env[1157]: time="2025-05-13T07:34:48.307734191Z" level=info msg="StartContainer for \"6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a\" returns successfully" May 13 07:34:48.422624 kubelet[1419]: E0513 07:34:48.422571 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:48.808101 kernel: cryptd: max_cpu_qlen set to 1000 May 13 07:34:48.868053 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) May 13 07:34:49.255718 kubelet[1419]: I0513 07:34:49.255256 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ql4lm" podStartSLOduration=6.2551338340000004 podStartE2EDuration="6.255133834s" podCreationTimestamp="2025-05-13 07:34:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 07:34:49.249639187 +0000 UTC m=+92.682373222" watchObservedRunningTime="2025-05-13 07:34:49.255133834 +0000 UTC m=+92.687867869" May 13 07:34:49.423094 kubelet[1419]: E0513 07:34:49.423025 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:50.423441 kubelet[1419]: E0513 07:34:50.423378 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:51.424432 kubelet[1419]: E0513 07:34:51.424381 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:51.575937 systemd[1]: run-containerd-runc-k8s.io-6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a-runc.qKubn3.mount: Deactivated successfully. May 13 07:34:52.344884 systemd-networkd[992]: lxc_health: Link UP May 13 07:34:52.358213 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 13 07:34:52.356044 systemd-networkd[992]: lxc_health: Gained carrier May 13 07:34:52.425198 kubelet[1419]: E0513 07:34:52.425125 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:53.425941 kubelet[1419]: E0513 07:34:53.425757 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:53.570602 systemd-networkd[992]: lxc_health: Gained IPv6LL May 13 07:34:53.913388 systemd[1]: run-containerd-runc-k8s.io-6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a-runc.YxhGe5.mount: Deactivated successfully. May 13 07:34:54.428560 kubelet[1419]: E0513 07:34:54.428377 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:55.431617 kubelet[1419]: E0513 07:34:55.431508 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:56.171331 systemd[1]: run-containerd-runc-k8s.io-6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a-runc.XWz0iv.mount: Deactivated successfully. 
May 13 07:34:56.433424 kubelet[1419]: E0513 07:34:56.433333 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:57.297410 kubelet[1419]: E0513 07:34:57.297289 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:57.435356 kubelet[1419]: E0513 07:34:57.435291 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:58.398537 systemd[1]: run-containerd-runc-k8s.io-6120b7ed7622c80dd12b2c7c21536ad592ec1fdbf8e579d7ded4805c265b8c4a-runc.ybZLCC.mount: Deactivated successfully. May 13 07:34:58.437224 kubelet[1419]: E0513 07:34:58.437051 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:34:59.437949 kubelet[1419]: E0513 07:34:59.437855 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:00.438586 kubelet[1419]: E0513 07:35:00.438517 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:01.440065 kubelet[1419]: E0513 07:35:01.439942 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:02.440576 kubelet[1419]: E0513 07:35:02.440448 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:03.441529 kubelet[1419]: E0513 07:35:03.441450 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:04.442195 kubelet[1419]: E0513 07:35:04.442063 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:05.443196 kubelet[1419]: E0513 07:35:05.443122 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:06.444437 kubelet[1419]: E0513 07:35:06.444269 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:07.444815 kubelet[1419]: E0513 07:35:07.444665 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:08.445271 kubelet[1419]: E0513 07:35:08.445178 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:09.445458 kubelet[1419]: E0513 07:35:09.445362 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:10.445161 update_engine[1149]: I0513 07:35:10.444854 1149 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 13 07:35:10.445161 update_engine[1149]: I0513 07:35:10.445125 1149 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 13 07:35:10.446547 kubelet[1419]: E0513 07:35:10.446465 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:10.447268 update_engine[1149]: I0513 07:35:10.447206 1149 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 13 07:35:10.448761 
update_engine[1149]: I0513 07:35:10.448669 1149 omaha_request_params.cc:62] Current group set to lts May 13 07:35:10.450136 update_engine[1149]: I0513 07:35:10.449954 1149 update_attempter.cc:499] Already updated boot flags. Skipping. May 13 07:35:10.450136 update_engine[1149]: I0513 07:35:10.450028 1149 update_attempter.cc:643] Scheduling an action processor start. May 13 07:35:10.450136 update_engine[1149]: I0513 07:35:10.450120 1149 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 13 07:35:10.450543 update_engine[1149]: I0513 07:35:10.450291 1149 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 13 07:35:10.450543 update_engine[1149]: I0513 07:35:10.450452 1149 omaha_request_action.cc:270] Posting an Omaha request to disabled May 13 07:35:10.450543 update_engine[1149]: I0513 07:35:10.450472 1149 omaha_request_action.cc:271] Request: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: May 13 07:35:10.450543 update_engine[1149]: I0513 07:35:10.450497 1149 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 07:35:10.472149 locksmithd[1195]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 13 07:35:10.476075 update_engine[1149]: I0513 07:35:10.475535 1149 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 07:35:10.476375 update_engine[1149]: E0513 07:35:10.476194 1149 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 07:35:10.476493 update_engine[1149]: I0513 07:35:10.476428 1149 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 13 07:35:11.447635 kubelet[1419]: E0513 07:35:11.447485 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:12.449727 kubelet[1419]: E0513 07:35:12.449644 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:13.451722 kubelet[1419]: E0513 07:35:13.451515 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:14.452712 kubelet[1419]: E0513 07:35:14.452642 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:15.454318 kubelet[1419]: E0513 07:35:15.454254 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:16.455875 kubelet[1419]: E0513 07:35:16.455800 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:17.297092 kubelet[1419]: E0513 07:35:17.297030 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:17.362387 env[1157]: time="2025-05-13T07:35:17.362231336Z" level=info msg="StopPodSandbox for \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\"" May 13 07:35:17.363887 env[1157]: time="2025-05-13T07:35:17.363720736Z" level=info msg="TearDown network for sandbox 
\"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" successfully" May 13 07:35:17.364167 env[1157]: time="2025-05-13T07:35:17.364112224Z" level=info msg="StopPodSandbox for \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" returns successfully" May 13 07:35:17.366457 env[1157]: time="2025-05-13T07:35:17.366358229Z" level=info msg="RemovePodSandbox for \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\"" May 13 07:35:17.366679 env[1157]: time="2025-05-13T07:35:17.366472805Z" level=info msg="Forcibly stopping sandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\"" May 13 07:35:17.366793 env[1157]: time="2025-05-13T07:35:17.366703321Z" level=info msg="TearDown network for sandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" successfully" May 13 07:35:17.379473 env[1157]: time="2025-05-13T07:35:17.379391067Z" level=info msg="RemovePodSandbox \"6ca2852c9fb7207c5d716a640352f0f0bdf16f950b1fdb1683090090b6b43617\" returns successfully" May 13 07:35:17.380478 env[1157]: time="2025-05-13T07:35:17.380414157Z" level=info msg="StopPodSandbox for \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\"" May 13 07:35:17.381034 env[1157]: time="2025-05-13T07:35:17.380886788Z" level=info msg="TearDown network for sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" successfully" May 13 07:35:17.381440 env[1157]: time="2025-05-13T07:35:17.381389105Z" level=info msg="StopPodSandbox for \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" returns successfully" May 13 07:35:17.382493 env[1157]: time="2025-05-13T07:35:17.382405922Z" level=info msg="RemovePodSandbox for \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\"" May 13 07:35:17.382666 env[1157]: time="2025-05-13T07:35:17.382512003Z" level=info msg="Forcibly stopping sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\"" May 13 07:35:17.382799 env[1157]: time="2025-05-13T07:35:17.382721328Z" level=info msg="TearDown network for sandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" successfully" May 13 07:35:17.389855 env[1157]: time="2025-05-13T07:35:17.389638081Z" level=info msg="RemovePodSandbox \"68df8711cef610d078e8445db18e50e1fcef012469d1c7b0ca7ceed7cc06c7b5\" returns successfully" May 13 07:35:17.457488 kubelet[1419]: E0513 07:35:17.457434 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:18.459383 kubelet[1419]: E0513 07:35:18.459317 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:19.460866 kubelet[1419]: E0513 07:35:19.460745 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:20.423077 update_engine[1149]: I0513 07:35:20.422487 1149 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 07:35:20.423077 update_engine[1149]: I0513 07:35:20.422914 1149 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 07:35:20.423954 update_engine[1149]: E0513 07:35:20.423146 1149 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 07:35:20.423954 update_engine[1149]: I0513 07:35:20.423295 1149 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 13 07:35:20.462447 kubelet[1419]: E0513 07:35:20.462382 1419 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:21.463140 kubelet[1419]: E0513 07:35:21.463063 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:22.464440 kubelet[1419]: E0513 07:35:22.464281 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:23.465308 kubelet[1419]: E0513 07:35:23.465191 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:24.466069 kubelet[1419]: E0513 07:35:24.465952 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:25.467939 kubelet[1419]: E0513 07:35:25.467845 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:26.468865 kubelet[1419]: E0513 07:35:26.468774 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:27.469820 kubelet[1419]: E0513 07:35:27.469745 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:28.471379 kubelet[1419]: E0513 07:35:28.471314 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:29.472297 kubelet[1419]: E0513 07:35:29.472217 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:30.430556 update_engine[1149]: I0513 07:35:30.430456 1149 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 07:35:30.431387 update_engine[1149]: I0513 07:35:30.431106 1149 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 07:35:30.431387 update_engine[1149]: E0513 07:35:30.431335 1149 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 07:35:30.431645 update_engine[1149]: I0513 07:35:30.431483 1149 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 13 07:35:30.474358 kubelet[1419]: E0513 07:35:30.474277 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:31.475238 kubelet[1419]: E0513 07:35:31.475165 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:32.476034 kubelet[1419]: E0513 07:35:32.475927 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:33.477520 kubelet[1419]: E0513 07:35:33.477458 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:34.478716 kubelet[1419]: E0513 07:35:34.478563 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:35.479509 kubelet[1419]: E0513 07:35:35.479394 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:36.480419 kubelet[1419]: E0513 07:35:36.480353 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 
07:35:37.297436 kubelet[1419]: E0513 07:35:37.297378 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:37.482266 kubelet[1419]: E0513 07:35:37.482186 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:38.482885 kubelet[1419]: E0513 07:35:38.482813 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:39.483529 kubelet[1419]: E0513 07:35:39.483472 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 07:35:40.426340 update_engine[1149]: I0513 07:35:40.426177 1149 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 13 07:35:40.427117 update_engine[1149]: I0513 07:35:40.426698 1149 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 13 07:35:40.427117 update_engine[1149]: E0513 07:35:40.426967 1149 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 13 07:35:40.427365 update_engine[1149]: I0513 07:35:40.427178 1149 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 13 07:35:40.427365 update_engine[1149]: I0513 07:35:40.427206 1149 omaha_request_action.cc:621] Omaha request response: May 13 07:35:40.427519 update_engine[1149]: E0513 07:35:40.427373 1149 omaha_request_action.cc:640] Omaha request network transfer failed. May 13 07:35:40.427519 update_engine[1149]: I0513 07:35:40.427428 1149 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 13 07:35:40.427519 update_engine[1149]: I0513 07:35:40.427438 1149 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 13 07:35:40.427519 update_engine[1149]: I0513 07:35:40.427445 1149 update_attempter.cc:306] Processing Done. May 13 07:35:40.427519 update_engine[1149]: E0513 07:35:40.427481 1149 update_attempter.cc:619] Update failed. May 13 07:35:40.427519 update_engine[1149]: I0513 07:35:40.427505 1149 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 13 07:35:40.427519 update_engine[1149]: I0513 07:35:40.427515 1149 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 13 07:35:40.428268 update_engine[1149]: I0513 07:35:40.427531 1149 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 13 07:35:40.428268 update_engine[1149]: I0513 07:35:40.427680 1149 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 13 07:35:40.428268 update_engine[1149]: I0513 07:35:40.427726 1149 omaha_request_action.cc:270] Posting an Omaha request to disabled
May 13 07:35:40.428268 update_engine[1149]: I0513 07:35:40.427739 1149 omaha_request_action.cc:271] Request:
May 13 07:35:40.428268 update_engine[1149]:
May 13 07:35:40.428268 update_engine[1149]:
May 13 07:35:40.428268 update_engine[1149]:
May 13 07:35:40.428268 update_engine[1149]:
May 13 07:35:40.428268 update_engine[1149]:
May 13 07:35:40.428268 update_engine[1149]:
May 13 07:35:40.428268 update_engine[1149]: I0513 07:35:40.427748 1149 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 13 07:35:40.428268 update_engine[1149]: I0513 07:35:40.428056 1149 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 13 07:35:40.428268 update_engine[1149]: E0513 07:35:40.428211 1149 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 13 07:35:40.429483 update_engine[1149]: I0513 07:35:40.428334 1149 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 13 07:35:40.429483 update_engine[1149]: I0513 07:35:40.428347 1149 omaha_request_action.cc:621] Omaha request response:
May 13 07:35:40.429483 update_engine[1149]: I0513 07:35:40.428357 1149 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 07:35:40.429483 update_engine[1149]: I0513 07:35:40.428364 1149 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 13 07:35:40.429483 update_engine[1149]: I0513 07:35:40.428370 1149 update_attempter.cc:306] Processing Done.
May 13 07:35:40.429483 update_engine[1149]: I0513 07:35:40.428378 1149 update_attempter.cc:310] Error event sent.
May 13 07:35:40.429483 update_engine[1149]: I0513 07:35:40.428421 1149 update_check_scheduler.cc:74] Next update check in 48m28s
May 13 07:35:40.430352 locksmithd[1195]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 13 07:35:40.430352 locksmithd[1195]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 13 07:35:40.484811 kubelet[1419]: E0513 07:35:40.484734 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:41.485319 kubelet[1419]: E0513 07:35:41.485206 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:42.486308 kubelet[1419]: E0513 07:35:42.486199 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:43.488014 kubelet[1419]: E0513 07:35:43.487914 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:44.489203 kubelet[1419]: E0513 07:35:44.489118 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:45.489964 kubelet[1419]: E0513 07:35:45.489850 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:46.491189 kubelet[1419]: E0513 07:35:46.491073 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:47.491739 kubelet[1419]: E0513 07:35:47.491680 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:48.492800 kubelet[1419]: E0513 07:35:48.492741 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:49.494412 kubelet[1419]: E0513 07:35:49.494315 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:50.495621 kubelet[1419]: E0513 07:35:50.495534 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:51.495895 kubelet[1419]: E0513 07:35:51.495800 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:52.496279 kubelet[1419]: E0513 07:35:52.496131 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:53.497275 kubelet[1419]: E0513 07:35:53.497196 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:54.497917 kubelet[1419]: E0513 07:35:54.497859 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:55.498886 kubelet[1419]: E0513 07:35:55.498826 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:56.500547 kubelet[1419]: E0513 07:35:56.500396 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:57.296822 kubelet[1419]: E0513 07:35:57.296764 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:57.501501 kubelet[1419]: E0513 07:35:57.501341 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:58.501675 kubelet[1419]: E0513 07:35:58.501563 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:35:59.502538 kubelet[1419]: E0513 07:35:59.502457 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:00.503433 kubelet[1419]: E0513 07:36:00.503368 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:01.504018 kubelet[1419]: E0513 07:36:01.503900 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:02.505186 kubelet[1419]: E0513 07:36:02.505078 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:03.506074 kubelet[1419]: E0513 07:36:03.506025 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:04.507712 kubelet[1419]: E0513 07:36:04.507551 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:05.512482 kubelet[1419]: E0513 07:36:05.512350 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:06.513194 kubelet[1419]: E0513 07:36:06.513073 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:07.514130 kubelet[1419]: E0513 07:36:07.514032 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:08.514660 kubelet[1419]: E0513 07:36:08.514532 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:09.515572 kubelet[1419]: E0513 07:36:09.515388 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:10.516134 kubelet[1419]: E0513 07:36:10.516072 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:11.518259 kubelet[1419]: E0513 07:36:11.518201 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:12.520216 kubelet[1419]: E0513 07:36:12.520067 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:13.520295 kubelet[1419]: E0513 07:36:13.520232 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:14.522372 kubelet[1419]: E0513 07:36:14.522309 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:15.524354 kubelet[1419]: E0513 07:36:15.524294 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:16.525646 kubelet[1419]: E0513 07:36:16.525520 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:17.297148 kubelet[1419]: E0513 07:36:17.297087 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:17.526284 kubelet[1419]: E0513 07:36:17.526210 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:18.527098 kubelet[1419]: E0513 07:36:18.527032 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:19.527339 kubelet[1419]: E0513 07:36:19.527184 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:20.528264 kubelet[1419]: E0513 07:36:20.528145 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:21.528778 kubelet[1419]: E0513 07:36:21.528698 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:22.529810 kubelet[1419]: E0513 07:36:22.529750 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:23.531023 kubelet[1419]: E0513 07:36:23.530928 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:24.531386 kubelet[1419]: E0513 07:36:24.531165 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:25.532038 kubelet[1419]: E0513 07:36:25.531937 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:26.532649 kubelet[1419]: E0513 07:36:26.532556 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:27.533171 kubelet[1419]: E0513 07:36:27.533108 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:28.535314 kubelet[1419]: E0513 07:36:28.535150 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:29.536443 kubelet[1419]: E0513 07:36:29.536229 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:30.537431 kubelet[1419]: E0513 07:36:30.537237 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:31.538165 kubelet[1419]: E0513 07:36:31.538026 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:32.539149 kubelet[1419]: E0513 07:36:32.539043 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:33.540133 kubelet[1419]: E0513 07:36:33.540073 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:34.542104 kubelet[1419]: E0513 07:36:34.542038 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:35.542827 kubelet[1419]: E0513 07:36:35.542708 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:36.543068 kubelet[1419]: E0513 07:36:36.542941 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:37.297391 kubelet[1419]: E0513 07:36:37.297310 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:37.544741 kubelet[1419]: E0513 07:36:37.544682 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:38.546642 kubelet[1419]: E0513 07:36:38.546537 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:39.547839 kubelet[1419]: E0513 07:36:39.547749 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:40.548453 kubelet[1419]: E0513 07:36:40.548341 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:41.548964 kubelet[1419]: E0513 07:36:41.548885 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 07:36:42.550969 kubelet[1419]: E0513 07:36:42.550909 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"