Jul 2 08:52:42.025698 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 08:52:42.025740 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:52:42.025765 kernel: BIOS-provided physical RAM map:
Jul 2 08:52:42.025780 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 08:52:42.025794 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 08:52:42.025809 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 08:52:42.025826 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Jul 2 08:52:42.025841 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Jul 2 08:52:42.025858 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 2 08:52:42.025872 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 08:52:42.025886 kernel: NX (Execute Disable) protection: active
Jul 2 08:52:42.025900 kernel: SMBIOS 2.8 present.
Jul 2 08:52:42.025914 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jul 2 08:52:42.025928 kernel: Hypervisor detected: KVM
Jul 2 08:52:42.025947 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 08:52:42.025965 kernel: kvm-clock: cpu 0, msr 6e192001, primary cpu clock
Jul 2 08:52:42.025980 kernel: kvm-clock: using sched offset of 8617048090 cycles
Jul 2 08:52:42.025996 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 08:52:42.026012 kernel: tsc: Detected 1996.249 MHz processor
Jul 2 08:52:42.026028 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 08:52:42.026044 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 08:52:42.026060 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Jul 2 08:52:42.026076 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 08:52:42.026095 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:52:42.026110 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Jul 2 08:52:42.026126 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:52:42.030170 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:52:42.030180 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:52:42.030188 kernel: ACPI: FACS 0x000000007FFE0000 000040
Jul 2 08:52:42.030195 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:52:42.030203 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:52:42.030211 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Jul 2 08:52:42.030222 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Jul 2 08:52:42.030230 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Jul 2 08:52:42.030237 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Jul 2 08:52:42.030245 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Jul 2 08:52:42.030252 kernel: No NUMA configuration found
Jul 2 08:52:42.030259 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Jul 2 08:52:42.030267 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Jul 2 08:52:42.030275 kernel: Zone ranges:
Jul 2 08:52:42.030288 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 08:52:42.030296 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Jul 2 08:52:42.030304 kernel: Normal empty
Jul 2 08:52:42.030312 kernel: Movable zone start for each node
Jul 2 08:52:42.030319 kernel: Early memory node ranges
Jul 2 08:52:42.030327 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 08:52:42.030336 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Jul 2 08:52:42.030344 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Jul 2 08:52:42.030352 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 08:52:42.030360 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 08:52:42.030368 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Jul 2 08:52:42.030376 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 2 08:52:42.030383 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 08:52:42.030391 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 08:52:42.030399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 2 08:52:42.030410 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 08:52:42.030419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 08:52:42.030427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 08:52:42.030436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 08:52:42.030444 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 08:52:42.030452 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 08:52:42.030460 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Jul 2 08:52:42.030468 kernel: Booting paravirtualized kernel on KVM
Jul 2 08:52:42.030477 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 08:52:42.030486 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Jul 2 08:52:42.030497 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Jul 2 08:52:42.030506 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Jul 2 08:52:42.030514 kernel: pcpu-alloc: [0] 0 1
Jul 2 08:52:42.030523 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Jul 2 08:52:42.030531 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 2 08:52:42.030539 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Jul 2 08:52:42.030548 kernel: Policy zone: DMA32
Jul 2 08:52:42.030557 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:52:42.030568 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:52:42.030577 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:52:42.030585 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 08:52:42.030594 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:52:42.030603 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 123076K reserved, 0K cma-reserved)
Jul 2 08:52:42.030611 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 08:52:42.030620 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 08:52:42.030628 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 08:52:42.030639 kernel: rcu: Hierarchical RCU implementation.
Jul 2 08:52:42.030649 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:52:42.030657 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 08:52:42.030666 kernel: Rude variant of Tasks RCU enabled.
Jul 2 08:52:42.030674 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:52:42.030683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:52:42.030692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 08:52:42.030700 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 08:52:42.030708 kernel: Console: colour VGA+ 80x25
Jul 2 08:52:42.030719 kernel: printk: console [tty0] enabled
Jul 2 08:52:42.030727 kernel: printk: console [ttyS0] enabled
Jul 2 08:52:42.030736 kernel: ACPI: Core revision 20210730
Jul 2 08:52:42.030744 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 08:52:42.030752 kernel: x2apic enabled
Jul 2 08:52:42.030761 kernel: Switched APIC routing to physical x2apic.
Jul 2 08:52:42.030770 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 2 08:52:42.030778 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 2 08:52:42.030786 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 2 08:52:42.030795 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 2 08:52:42.030805 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 2 08:52:42.030814 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 08:52:42.030822 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 08:52:42.030830 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 08:52:42.030839 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 08:52:42.030847 kernel: Speculative Store Bypass: Vulnerable
Jul 2 08:52:42.030856 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 2 08:52:42.030864 kernel: Freeing SMP alternatives memory: 32K
Jul 2 08:52:42.030872 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:52:42.030883 kernel: LSM: Security Framework initializing
Jul 2 08:52:42.030892 kernel: SELinux: Initializing.
Jul 2 08:52:42.030900 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:52:42.030909 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 08:52:42.030917 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 2 08:52:42.030926 kernel: Performance Events: AMD PMU driver.
Jul 2 08:52:42.030934 kernel: ... version: 0
Jul 2 08:52:42.030943 kernel: ... bit width: 48
Jul 2 08:52:42.030951 kernel: ... generic registers: 4
Jul 2 08:52:42.030968 kernel: ... value mask: 0000ffffffffffff
Jul 2 08:52:42.030976 kernel: ... max period: 00007fffffffffff
Jul 2 08:52:42.030987 kernel: ... fixed-purpose events: 0
Jul 2 08:52:42.030995 kernel: ... event mask: 000000000000000f
Jul 2 08:52:42.031004 kernel: signal: max sigframe size: 1440
Jul 2 08:52:42.031013 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:52:42.031021 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:52:42.031030 kernel: x86: Booting SMP configuration:
Jul 2 08:52:42.031040 kernel: .... node #0, CPUs: #1
Jul 2 08:52:42.031049 kernel: kvm-clock: cpu 1, msr 6e192041, secondary cpu clock
Jul 2 08:52:42.031057 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Jul 2 08:52:42.031066 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 08:52:42.031075 kernel: smpboot: Max logical packages: 2
Jul 2 08:52:42.031083 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 2 08:52:42.031092 kernel: devtmpfs: initialized
Jul 2 08:52:42.031101 kernel: x86/mm: Memory block size: 128MB
Jul 2 08:52:42.031109 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:52:42.031120 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 08:52:42.031129 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:52:42.031152 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:52:42.031161 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:52:42.031170 kernel: audit: type=2000 audit(1719910361.420:1): state=initialized audit_enabled=0 res=1
Jul 2 08:52:42.031178 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:52:42.031187 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 08:52:42.031195 kernel: cpuidle: using governor menu
Jul 2 08:52:42.031204 kernel: ACPI: bus type PCI registered
Jul 2 08:52:42.031215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:52:42.031224 kernel: dca service started, version 1.12.1
Jul 2 08:52:42.031232 kernel: PCI: Using configuration type 1 for base access
Jul 2 08:52:42.031241 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 08:52:42.031250 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:52:42.031259 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:52:42.031267 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:52:42.031276 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:52:42.031285 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:52:42.031295 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 08:52:42.031304 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 08:52:42.031313 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 08:52:42.031321 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:52:42.031330 kernel: ACPI: Interpreter enabled
Jul 2 08:52:42.031339 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 08:52:42.031347 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 08:52:42.031356 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 08:52:42.031365 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 2 08:52:42.031376 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 08:52:42.031527 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:52:42.031620 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Jul 2 08:52:42.031633 kernel: acpiphp: Slot [3] registered
Jul 2 08:52:42.031642 kernel: acpiphp: Slot [4] registered
Jul 2 08:52:42.031651 kernel: acpiphp: Slot [5] registered
Jul 2 08:52:42.031660 kernel: acpiphp: Slot [6] registered
Jul 2 08:52:42.031673 kernel: acpiphp: Slot [7] registered
Jul 2 08:52:42.031681 kernel: acpiphp: Slot [8] registered
Jul 2 08:52:42.031690 kernel: acpiphp: Slot [9] registered
Jul 2 08:52:42.031698 kernel: acpiphp: Slot [10] registered
Jul 2 08:52:42.031707 kernel: acpiphp: Slot [11] registered
Jul 2 08:52:42.031716 kernel: acpiphp: Slot [12] registered
Jul 2 08:52:42.031724 kernel: acpiphp: Slot [13] registered
Jul 2 08:52:42.031733 kernel: acpiphp: Slot [14] registered
Jul 2 08:52:42.031741 kernel: acpiphp: Slot [15] registered
Jul 2 08:52:42.031750 kernel: acpiphp: Slot [16] registered
Jul 2 08:52:42.031761 kernel: acpiphp: Slot [17] registered
Jul 2 08:52:42.031770 kernel: acpiphp: Slot [18] registered
Jul 2 08:52:42.031778 kernel: acpiphp: Slot [19] registered
Jul 2 08:52:42.031787 kernel: acpiphp: Slot [20] registered
Jul 2 08:52:42.031795 kernel: acpiphp: Slot [21] registered
Jul 2 08:52:42.031804 kernel: acpiphp: Slot [22] registered
Jul 2 08:52:42.031812 kernel: acpiphp: Slot [23] registered
Jul 2 08:52:42.031821 kernel: acpiphp: Slot [24] registered
Jul 2 08:52:42.031830 kernel: acpiphp: Slot [25] registered
Jul 2 08:52:42.031840 kernel: acpiphp: Slot [26] registered
Jul 2 08:52:42.031849 kernel: acpiphp: Slot [27] registered
Jul 2 08:52:42.031859 kernel: acpiphp: Slot [28] registered
Jul 2 08:52:42.031868 kernel: acpiphp: Slot [29] registered
Jul 2 08:52:42.031876 kernel: acpiphp: Slot [30] registered
Jul 2 08:52:42.031884 kernel: acpiphp: Slot [31] registered
Jul 2 08:52:42.031892 kernel: PCI host bridge to bus 0000:00
Jul 2 08:52:42.031990 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 08:52:42.032065 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 08:52:42.034185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 08:52:42.034274 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 08:52:42.034348 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Jul 2 08:52:42.034422 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 08:52:42.034525 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 08:52:42.034622 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 08:52:42.034721 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 2 08:52:42.034808 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 2 08:52:42.034892 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 2 08:52:42.034976 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 2 08:52:42.035059 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 2 08:52:42.035161 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 2 08:52:42.035257 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 08:52:42.035349 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 2 08:52:42.035433 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 2 08:52:42.035534 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 2 08:52:42.035621 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 2 08:52:42.035706 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Jul 2 08:52:42.035790 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 2 08:52:42.035878 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 2 08:52:42.035962 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 08:52:42.036054 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 2 08:52:42.036153 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 2 08:52:42.036259 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 2 08:52:42.036353 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Jul 2 08:52:42.036443 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 2 08:52:42.036545 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 2 08:52:42.036638 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 2 08:52:42.036728 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 2 08:52:42.036817 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Jul 2 08:52:42.036916 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 2 08:52:42.037011 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 2 08:52:42.037100 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jul 2 08:52:42.040429 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 08:52:42.040537 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 2 08:52:42.040629 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Jul 2 08:52:42.040643 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 08:52:42.040652 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 08:52:42.040662 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 08:52:42.040671 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 08:52:42.040680 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 08:52:42.040695 kernel: iommu: Default domain type: Translated
Jul 2 08:52:42.040704 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 08:52:42.040796 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 2 08:52:42.040886 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 08:52:42.040975 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 2 08:52:42.040988 kernel: vgaarb: loaded
Jul 2 08:52:42.040998 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 08:52:42.041007 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 08:52:42.041016 kernel: PTP clock support registered
Jul 2 08:52:42.041029 kernel: PCI: Using ACPI for IRQ routing
Jul 2 08:52:42.041038 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 08:52:42.041048 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 08:52:42.041057 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Jul 2 08:52:42.041066 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 08:52:42.041075 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:52:42.041084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:52:42.041093 kernel: pnp: PnP ACPI init
Jul 2 08:52:42.041208 kernel: pnp 00:03: [dma 2]
Jul 2 08:52:42.041228 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 08:52:42.041237 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 08:52:42.041246 kernel: NET: Registered PF_INET protocol family
Jul 2 08:52:42.041256 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:52:42.041265 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 08:52:42.041274 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:52:42.041289 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 08:52:42.041299 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Jul 2 08:52:42.041311 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 08:52:42.041320 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:52:42.041329 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 08:52:42.041338 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:52:42.041346 kernel: NET: Registered PF_XDP protocol family
Jul 2 08:52:42.041432 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 08:52:42.041518 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 08:52:42.041597 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 08:52:42.041673 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 08:52:42.041754 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Jul 2 08:52:42.041844 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 2 08:52:42.041935 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 08:52:42.042023 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Jul 2 08:52:42.042036 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:52:42.042046 kernel: Initialise system trusted keyrings
Jul 2 08:52:42.042055 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 08:52:42.042068 kernel: Key type asymmetric registered
Jul 2 08:52:42.042077 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:52:42.042086 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 08:52:42.042095 kernel: io scheduler mq-deadline registered
Jul 2 08:52:42.042104 kernel: io scheduler kyber registered
Jul 2 08:52:42.042113 kernel: io scheduler bfq registered
Jul 2 08:52:42.042122 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 08:52:42.042147 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 2 08:52:42.042157 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 08:52:42.042166 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 2 08:52:42.042178 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 08:52:42.042187 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:52:42.042196 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 08:52:42.042205 kernel: random: crng init done
Jul 2 08:52:42.042214 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 08:52:42.042223 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 08:52:42.042232 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 08:52:42.042339 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 2 08:52:42.042357 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 08:52:42.042437 kernel: rtc_cmos 00:04: registered as rtc0
Jul 2 08:52:42.042518 kernel: rtc_cmos 00:04: setting system clock to 2024-07-02T08:52:41 UTC (1719910361)
Jul 2 08:52:42.042597 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 2 08:52:42.042610 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:52:42.042619 kernel: Segment Routing with IPv6
Jul 2 08:52:42.042628 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:52:42.042637 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:52:42.042646 kernel: Key type dns_resolver registered
Jul 2 08:52:42.042658 kernel: IPI shorthand broadcast: enabled
Jul 2 08:52:42.042667 kernel: sched_clock: Marking stable (726014138, 132809352)->(883463616, -24640126)
Jul 2 08:52:42.042677 kernel: registered taskstats version 1
Jul 2 08:52:42.042685 kernel: Loading compiled-in X.509 certificates
Jul 2 08:52:42.042694 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 08:52:42.042703 kernel: Key type .fscrypt registered
Jul 2 08:52:42.042712 kernel: Key type fscrypt-provisioning registered
Jul 2 08:52:42.042722 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:52:42.042734 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:52:42.042743 kernel: ima: No architecture policies found
Jul 2 08:52:42.042752 kernel: clk: Disabling unused clocks
Jul 2 08:52:42.042761 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 08:52:42.042771 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 08:52:42.042780 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 08:52:42.042789 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 08:52:42.042798 kernel: Run /init as init process
Jul 2 08:52:42.042807 kernel: with arguments:
Jul 2 08:52:42.042819 kernel: /init
Jul 2 08:52:42.042828 kernel: with environment:
Jul 2 08:52:42.042836 kernel: HOME=/
Jul 2 08:52:42.042844 kernel: TERM=linux
Jul 2 08:52:42.042853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:52:42.042865 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 08:52:42.042878 systemd[1]: Detected virtualization kvm.
Jul 2 08:52:42.042890 systemd[1]: Detected architecture x86-64.
Jul 2 08:52:42.042911 systemd[1]: Running in initrd.
Jul 2 08:52:42.042928 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:52:42.042937 systemd[1]: Hostname set to .
Jul 2 08:52:42.042947 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:52:42.042956 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:52:42.042965 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 08:52:42.042974 systemd[1]: Reached target cryptsetup.target.
Jul 2 08:52:42.042983 systemd[1]: Reached target paths.target.
Jul 2 08:52:42.042996 systemd[1]: Reached target slices.target.
Jul 2 08:52:42.043005 systemd[1]: Reached target swap.target.
Jul 2 08:52:42.043013 systemd[1]: Reached target timers.target.
Jul 2 08:52:42.043022 systemd[1]: Listening on iscsid.socket.
Jul 2 08:52:42.043031 systemd[1]: Listening on iscsiuio.socket.
Jul 2 08:52:42.043040 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 08:52:42.043050 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 08:52:42.043059 systemd[1]: Listening on systemd-journald.socket.
Jul 2 08:52:42.043071 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 08:52:42.043080 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 08:52:42.043089 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 08:52:42.043098 systemd[1]: Reached target sockets.target.
Jul 2 08:52:42.043124 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 08:52:42.043168 systemd[1]: Finished network-cleanup.service.
Jul 2 08:52:42.043181 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:52:42.043190 systemd[1]: Starting systemd-journald.service...
Jul 2 08:52:42.043200 systemd[1]: Starting systemd-modules-load.service...
Jul 2 08:52:42.043209 systemd[1]: Starting systemd-resolved.service...
Jul 2 08:52:42.043218 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 08:52:42.043227 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 08:52:42.043236 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:52:42.043251 systemd-journald[185]: Journal started
Jul 2 08:52:42.043315 systemd-journald[185]: Runtime Journal (/run/log/journal/42f41e2627f14bfb89b43f899d8948b0) is 4.9M, max 39.5M, 34.5M free.
Jul 2 08:52:42.001602 systemd-modules-load[186]: Inserted module 'overlay'
Jul 2 08:52:42.067881 systemd[1]: Started systemd-journald.service.
Jul 2 08:52:42.067925 kernel: audit: type=1130 audit(1719910362.062:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.055311 systemd-resolved[187]: Positive Trust Anchors:
Jul 2 08:52:42.075854 kernel: audit: type=1130 audit(1719910362.068:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.075899 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:52:42.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.055322 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:52:42.088346 kernel: audit: type=1130 audit(1719910362.075:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.088394 kernel: audit: type=1130 audit(1719910362.079:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.055361 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 08:52:42.092761 kernel: Bridge firewalling registered
Jul 2 08:52:42.058240 systemd-resolved[187]: Defaulting to hostname 'linux'.
Jul 2 08:52:42.068369 systemd[1]: Started systemd-resolved.service.
Jul 2 08:52:42.075689 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 08:52:42.080411 systemd[1]: Reached target nss-lookup.target.
Jul 2 08:52:42.106885 kernel: audit: type=1130 audit(1719910362.102:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.089536 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 08:52:42.090085 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jul 2 08:52:42.093968 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 08:52:42.101702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 08:52:42.113107 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 08:52:42.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.114592 systemd[1]: Starting dracut-cmdline.service...
Jul 2 08:52:42.119736 kernel: audit: type=1130 audit(1719910362.113:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.128176 dracut-cmdline[202]: dracut-dracut-053
Jul 2 08:52:42.130911 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 08:52:42.134421 kernel: SCSI subsystem initialized
Jul 2 08:52:42.148168 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:52:42.151520 kernel: device-mapper: uevent: version 1.0.3
Jul 2 08:52:42.151548 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 08:52:42.157971 systemd-modules-load[186]: Inserted module 'dm_multipath'
Jul 2 08:52:42.158896 systemd[1]: Finished systemd-modules-load.service.
Jul 2 08:52:42.160812 systemd[1]: Starting systemd-sysctl.service...
Jul 2 08:52:42.171292 kernel: audit: type=1130 audit(1719910362.159:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.173516 systemd[1]: Finished systemd-sysctl.service.
Jul 2 08:52:42.178535 kernel: audit: type=1130 audit(1719910362.173:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.225198 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:52:42.248166 kernel: iscsi: registered transport (tcp)
Jul 2 08:52:42.277671 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:52:42.277763 kernel: QLogic iSCSI HBA Driver
Jul 2 08:52:42.313636 systemd[1]: Finished dracut-cmdline.service.
Jul 2 08:52:42.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.315402 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 08:52:42.321175 kernel: audit: type=1130 audit(1719910362.314:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.373214 kernel: raid6: sse2x4 gen() 12487 MB/s
Jul 2 08:52:42.390172 kernel: raid6: sse2x4 xor() 6763 MB/s
Jul 2 08:52:42.407166 kernel: raid6: sse2x2 gen() 14011 MB/s
Jul 2 08:52:42.424177 kernel: raid6: sse2x2 xor() 8280 MB/s
Jul 2 08:52:42.441168 kernel: raid6: sse2x1 gen() 11034 MB/s
Jul 2 08:52:42.458984 kernel: raid6: sse2x1 xor() 6635 MB/s
Jul 2 08:52:42.459033 kernel: raid6: using algorithm sse2x2 gen() 14011 MB/s
Jul 2 08:52:42.459055 kernel: raid6: .... xor() 8280 MB/s, rmw enabled
Jul 2 08:52:42.459958 kernel: raid6: using ssse3x2 recovery algorithm
Jul 2 08:52:42.475169 kernel: xor: measuring software checksum speed
Jul 2 08:52:42.477742 kernel: prefetch64-sse : 17156 MB/sec
Jul 2 08:52:42.477774 kernel: generic_sse : 15719 MB/sec
Jul 2 08:52:42.477803 kernel: xor: using function: prefetch64-sse (17156 MB/sec)
Jul 2 08:52:42.601252 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Jul 2 08:52:42.620835 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 08:52:42.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.621000 audit: BPF prog-id=7 op=LOAD
Jul 2 08:52:42.621000 audit: BPF prog-id=8 op=LOAD
Jul 2 08:52:42.622395 systemd[1]: Starting systemd-udevd.service...
Jul 2 08:52:42.637252 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Jul 2 08:52:42.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.641905 systemd[1]: Started systemd-udevd.service.
Jul 2 08:52:42.647478 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 08:52:42.664358 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Jul 2 08:52:42.718224 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 08:52:42.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.722017 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 08:52:42.769716 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 08:52:42.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:42.819194 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Jul 2 08:52:42.857181 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 08:52:42.857280 kernel: GPT:17805311 != 41943039
Jul 2 08:52:42.857295 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 08:52:42.857308 kernel: GPT:17805311 != 41943039
Jul 2 08:52:42.857321 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 08:52:42.857334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:52:42.862160 kernel: libata version 3.00 loaded.
Jul 2 08:52:42.864531 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 2 08:52:42.866353 kernel: scsi host0: ata_piix
Jul 2 08:52:42.867506 kernel: scsi host1: ata_piix
Jul 2 08:52:42.867658 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jul 2 08:52:42.867673 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jul 2 08:52:42.897182 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (433)
Jul 2 08:52:42.907658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 08:52:42.944947 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 08:52:42.948761 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 08:52:42.949966 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 08:52:42.954802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 08:52:42.956766 systemd[1]: Starting disk-uuid.service...
Jul 2 08:52:42.970213 disk-uuid[460]: Primary Header is updated.
Jul 2 08:52:42.970213 disk-uuid[460]: Secondary Entries is updated.
Jul 2 08:52:42.970213 disk-uuid[460]: Secondary Header is updated.
Jul 2 08:52:42.981177 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:52:42.994205 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:52:43.996181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 08:52:43.997701 disk-uuid[461]: The operation has completed successfully.
Jul 2 08:52:44.066420 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 08:52:44.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.066620 systemd[1]: Finished disk-uuid.service.
Jul 2 08:52:44.083281 systemd[1]: Starting verity-setup.service...
Jul 2 08:52:44.114219 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jul 2 08:52:44.392560 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 08:52:44.396007 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 08:52:44.404490 systemd[1]: Finished verity-setup.service.
Jul 2 08:52:44.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.602233 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 08:52:44.602689 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 08:52:44.603323 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 08:52:44.604160 systemd[1]: Starting ignition-setup.service...
Jul 2 08:52:44.605771 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 08:52:44.658245 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 08:52:44.658349 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:52:44.658378 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 08:52:44.775824 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 08:52:44.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.778000 audit: BPF prog-id=9 op=LOAD
Jul 2 08:52:44.780328 systemd[1]: Starting systemd-networkd.service...
Jul 2 08:52:44.837887 systemd-networkd[624]: lo: Link UP
Jul 2 08:52:44.837901 systemd-networkd[624]: lo: Gained carrier
Jul 2 08:52:44.838486 systemd-networkd[624]: Enumeration completed
Jul 2 08:52:44.838566 systemd[1]: Started systemd-networkd.service.
Jul 2 08:52:44.838794 systemd-networkd[624]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 08:52:44.843397 systemd-networkd[624]: eth0: Link UP
Jul 2 08:52:44.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.843407 systemd-networkd[624]: eth0: Gained carrier
Jul 2 08:52:44.845990 systemd[1]: Reached target network.target.
Jul 2 08:52:44.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.848449 systemd[1]: Starting iscsiuio.service...
Jul 2 08:52:44.857068 systemd[1]: Started iscsiuio.service.
Jul 2 08:52:44.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.866723 iscsid[634]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 08:52:44.866723 iscsid[634]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Jul 2 08:52:44.866723 iscsid[634]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Jul 2 08:52:44.866723 iscsid[634]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 08:52:44.866723 iscsid[634]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 08:52:44.866723 iscsid[634]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 08:52:44.866723 iscsid[634]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 08:52:44.858572 systemd[1]: Starting iscsid.service...
Jul 2 08:52:44.859294 systemd-networkd[624]: eth0: DHCPv4 address 172.24.4.136/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 2 08:52:44.863403 systemd[1]: Started iscsid.service.
Jul 2 08:52:44.865897 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 08:52:44.867196 systemd[1]: Starting dracut-initqueue.service...
Jul 2 08:52:44.887239 systemd[1]: Finished dracut-initqueue.service.
Jul 2 08:52:44.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:44.888522 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 08:52:44.889544 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 08:52:44.890515 systemd[1]: Reached target remote-fs.target.
Jul 2 08:52:44.892332 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 08:52:44.911065 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 08:52:44.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:45.015078 systemd[1]: Finished ignition-setup.service.
Jul 2 08:52:45.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:45.019589 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 08:52:45.393850 ignition[649]: Ignition 2.14.0
Jul 2 08:52:45.393893 ignition[649]: Stage: fetch-offline
Jul 2 08:52:45.394068 ignition[649]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:52:45.394128 ignition[649]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Jul 2 08:52:45.397735 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 08:52:45.398187 ignition[649]: parsed url from cmdline: ""
Jul 2 08:52:45.401891 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 08:52:45.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:45.398205 ignition[649]: no config URL provided
Jul 2 08:52:45.398229 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:52:45.405769 systemd[1]: Starting ignition-fetch.service...
Jul 2 08:52:45.398263 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:52:45.398281 ignition[649]: failed to fetch config: resource requires networking
Jul 2 08:52:45.399254 ignition[649]: Ignition finished successfully
Jul 2 08:52:45.422282 ignition[654]: Ignition 2.14.0
Jul 2 08:52:45.422310 ignition[654]: Stage: fetch
Jul 2 08:52:45.422519 ignition[654]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:52:45.422556 ignition[654]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Jul 2 08:52:45.424530 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 08:52:45.424709 ignition[654]: parsed url from cmdline: ""
Jul 2 08:52:45.424717 ignition[654]: no config URL provided
Jul 2 08:52:45.424728 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 08:52:45.424743 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Jul 2 08:52:45.433249 ignition[654]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 2 08:52:45.433300 ignition[654]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 2 08:52:45.435235 ignition[654]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 2 08:52:45.644698 ignition[654]: GET result: OK
Jul 2 08:52:45.644812 ignition[654]: parsing config with SHA512: df930b38262c8299aad35947edf9315c2f079cdb921ebf703d31c547a8f49b0a9c652c5553f21b0d41abdfd914522e26e411030584fe2f67861a655ff4a79b2f
Jul 2 08:52:45.724381 unknown[654]: fetched base config from "system"
Jul 2 08:52:45.725276 unknown[654]: fetched base config from "system"
Jul 2 08:52:45.725834 unknown[654]: fetched user config from "openstack"
Jul 2 08:52:45.726801 ignition[654]: fetch: fetch complete
Jul 2 08:52:45.727321 ignition[654]: fetch: fetch passed
Jul 2 08:52:45.727855 ignition[654]: Ignition finished successfully
Jul 2 08:52:45.731758 systemd[1]: Finished ignition-fetch.service.
Jul 2 08:52:45.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:45.736431 systemd[1]: Starting ignition-kargs.service...
Jul 2 08:52:45.754470 ignition[660]: Ignition 2.14.0
Jul 2 08:52:45.754486 ignition[660]: Stage: kargs
Jul 2 08:52:45.754604 ignition[660]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:52:45.754625 ignition[660]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Jul 2 08:52:45.755593 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 08:52:45.756483 ignition[660]: kargs: kargs passed
Jul 2 08:52:45.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:45.758454 systemd[1]: Finished ignition-kargs.service.
Jul 2 08:52:45.756530 ignition[660]: Ignition finished successfully
Jul 2 08:52:45.761432 systemd[1]: Starting ignition-disks.service...
Jul 2 08:52:45.778822 ignition[666]: Ignition 2.14.0
Jul 2 08:52:45.780593 ignition[666]: Stage: disks
Jul 2 08:52:45.782276 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 08:52:45.784076 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Jul 2 08:52:45.786647 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 2 08:52:45.791100 ignition[666]: disks: disks passed
Jul 2 08:52:45.792498 ignition[666]: Ignition finished successfully
Jul 2 08:52:45.795839 systemd[1]: Finished ignition-disks.service.
Jul 2 08:52:45.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:45.797384 systemd[1]: Reached target initrd-root-device.target.
Jul 2 08:52:45.799516 systemd[1]: Reached target local-fs-pre.target.
Jul 2 08:52:45.801785 systemd[1]: Reached target local-fs.target.
Jul 2 08:52:45.803894 systemd[1]: Reached target sysinit.target.
Jul 2 08:52:45.806961 systemd[1]: Reached target basic.target.
Jul 2 08:52:45.812129 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 08:52:46.133726 systemd-fsck[674]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks
Jul 2 08:52:46.311212 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 08:52:46.317879 kernel: kauditd_printk_skb: 21 callbacks suppressed
Jul 2 08:52:46.317906 kernel: audit: type=1130 audit(1719910366.312:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:46.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:46.314381 systemd[1]: Mounting sysroot.mount...
Jul 2 08:52:46.331157 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 08:52:46.332536 systemd[1]: Mounted sysroot.mount.
Jul 2 08:52:46.334227 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 08:52:46.338300 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 08:52:46.340085 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 08:52:46.341652 systemd[1]: Starting flatcar-openstack-hostname.service...
Jul 2 08:52:46.342835 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 08:52:46.342896 systemd[1]: Reached target ignition-diskful.target.
Jul 2 08:52:46.347259 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 08:52:46.355431 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 08:52:46.361784 systemd[1]: Starting initrd-setup-root.service...
Jul 2 08:52:46.368857 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681)
Jul 2 08:52:46.374767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 08:52:46.374829 kernel: BTRFS info (device vda6): using free space tree
Jul 2 08:52:46.374842 kernel: BTRFS info (device vda6): has skinny extents
Jul 2 08:52:46.378644 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 08:52:46.403756 initrd-setup-root[712]: cut: /sysroot/etc/group: No such file or directory
Jul 2 08:52:46.407874 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 08:52:46.419461 initrd-setup-root[720]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 08:52:46.429130 initrd-setup-root[728]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 08:52:46.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:46.502255 systemd[1]: Finished initrd-setup-root.service.
Jul 2 08:52:46.517781 kernel: audit: type=1130 audit(1719910366.502:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 08:52:46.503778 systemd[1]: Starting ignition-mount.service...
Jul 2 08:52:46.505035 systemd[1]: Starting sysroot-boot.service...
Jul 2 08:52:46.522873 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Jul 2 08:52:46.523128 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Jul 2 08:52:46.541479 ignition[749]: INFO : Ignition 2.14.0 Jul 2 08:52:46.541479 ignition[749]: INFO : Stage: mount Jul 2 08:52:46.542726 ignition[749]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:52:46.542726 ignition[749]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:52:46.542726 ignition[749]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:52:46.546666 ignition[749]: INFO : mount: mount passed Jul 2 08:52:46.546666 ignition[749]: INFO : Ignition finished successfully Jul 2 08:52:46.550971 kernel: audit: type=1130 audit(1719910366.545:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.545159 systemd[1]: Finished ignition-mount.service. Jul 2 08:52:46.553561 systemd-networkd[624]: eth0: Gained IPv6LL Jul 2 08:52:46.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.568820 systemd[1]: Finished sysroot-boot.service. Jul 2 08:52:46.573177 kernel: audit: type=1130 audit(1719910366.569:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.581517 coreos-metadata[680]: Jul 02 08:52:46.581 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 08:52:46.596546 coreos-metadata[680]: Jul 02 08:52:46.596 INFO Fetch successful Jul 2 08:52:46.597931 coreos-metadata[680]: Jul 02 08:52:46.597 INFO wrote hostname ci-3510-3-5-3-6197e17ca9.novalocal to /sysroot/etc/hostname Jul 2 08:52:46.602915 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 2 08:52:46.603114 systemd[1]: Finished flatcar-openstack-hostname.service. Jul 2 08:52:46.622411 kernel: audit: type=1130 audit(1719910366.605:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.622466 kernel: audit: type=1131 audit(1719910366.605:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:46.607454 systemd[1]: Starting ignition-files.service... Jul 2 08:52:46.632047 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Jul 2 08:52:46.648201 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (758) Jul 2 08:52:46.657642 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 08:52:46.657706 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:52:46.657743 kernel: BTRFS info (device vda6): has skinny extents Jul 2 08:52:46.669696 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 08:52:46.693335 ignition[777]: INFO : Ignition 2.14.0 Jul 2 08:52:46.693335 ignition[777]: INFO : Stage: files Jul 2 08:52:46.694565 ignition[777]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:52:46.694565 ignition[777]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:52:46.696271 ignition[777]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:52:46.698271 ignition[777]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:52:46.699018 ignition[777]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:52:46.699018 ignition[777]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:52:46.702624 ignition[777]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:52:46.703467 ignition[777]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:52:46.705057 unknown[777]: wrote ssh authorized keys file for user: core Jul 2 08:52:46.706213 ignition[777]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:52:46.706917 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 08:52:47.113804 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 2 08:52:48.860035 ignition[777]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 08:52:48.860035 ignition[777]: INFO : files: op(7): [started] processing unit 
"coreos-metadata-sshkeys@.service" Jul 2 08:52:48.860035 ignition[777]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 08:52:48.860035 ignition[777]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 08:52:48.867738 ignition[777]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 08:52:48.874693 ignition[777]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:52:48.875706 ignition[777]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:52:48.875706 ignition[777]: INFO : files: files passed Jul 2 08:52:48.875706 ignition[777]: INFO : Ignition finished successfully Jul 2 08:52:48.880654 systemd[1]: Finished ignition-files.service. Jul 2 08:52:48.893070 kernel: audit: type=1130 audit(1719910368.884:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.887288 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 08:52:48.889082 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 08:52:48.890224 systemd[1]: Starting ignition-quench.service... Jul 2 08:52:48.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.896848 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:52:48.896962 systemd[1]: Finished ignition-quench.service. Jul 2 08:52:48.906438 kernel: audit: type=1130 audit(1719910368.897:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.911160 kernel: audit: type=1131 audit(1719910368.897:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.914346 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:52:48.916082 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 08:52:48.926733 kernel: audit: type=1130 audit(1719910368.916:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:52:48.916820 systemd[1]: Reached target ignition-complete.target. Jul 2 08:52:48.927989 systemd[1]: Starting initrd-parse-etc.service... Jul 2 08:52:48.950633 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:52:48.951491 systemd[1]: Finished initrd-parse-etc.service. Jul 2 08:52:48.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.953392 systemd[1]: Reached target initrd-fs.target. Jul 2 08:52:48.954457 systemd[1]: Reached target initrd.target. Jul 2 08:52:48.956071 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 08:52:48.957932 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 08:52:48.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:48.975852 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 08:52:48.978762 systemd[1]: Starting initrd-cleanup.service... Jul 2 08:52:48.997712 systemd[1]: Stopped target nss-lookup.target. Jul 2 08:52:49.000581 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 08:52:49.002865 systemd[1]: Stopped target timers.target. Jul 2 08:52:49.003711 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:52:49.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.003906 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 08:52:49.005680 systemd[1]: Stopped target initrd.target. Jul 2 08:52:49.007368 systemd[1]: Stopped target basic.target. Jul 2 08:52:49.008986 systemd[1]: Stopped target ignition-complete.target. Jul 2 08:52:49.010610 systemd[1]: Stopped target ignition-diskful.target. Jul 2 08:52:49.012110 systemd[1]: Stopped target initrd-root-device.target. Jul 2 08:52:49.013526 systemd[1]: Stopped target remote-fs.target. Jul 2 08:52:49.014621 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 08:52:49.015692 systemd[1]: Stopped target sysinit.target. Jul 2 08:52:49.016667 systemd[1]: Stopped target local-fs.target. Jul 2 08:52:49.017711 systemd[1]: Stopped target local-fs-pre.target. Jul 2 08:52:49.018637 systemd[1]: Stopped target swap.target. Jul 2 08:52:49.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.019509 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:52:49.019696 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 08:52:49.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.020670 systemd[1]: Stopped target cryptsetup.target. 
Jul 2 08:52:49.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.021447 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:52:49.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.021596 systemd[1]: Stopped dracut-initqueue.service. Jul 2 08:52:49.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.022504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:52:49.022664 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 08:52:49.023375 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:52:49.023550 systemd[1]: Stopped ignition-files.service. Jul 2 08:52:49.025430 systemd[1]: Stopping ignition-mount.service... Jul 2 08:52:49.027740 systemd[1]: Stopping sysroot-boot.service... Jul 2 08:52:49.028266 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:52:49.028396 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 08:52:49.029225 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:52:49.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.029529 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 08:52:49.039951 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:52:49.040050 systemd[1]: Finished initrd-cleanup.service. Jul 2 08:52:49.049084 ignition[815]: INFO : Ignition 2.14.0 Jul 2 08:52:49.049084 ignition[815]: INFO : Stage: umount Jul 2 08:52:49.049084 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 08:52:49.049084 ignition[815]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Jul 2 08:52:49.049084 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 2 08:52:49.049084 ignition[815]: INFO : umount: umount passed Jul 2 08:52:49.049084 ignition[815]: INFO : Ignition finished successfully Jul 2 08:52:49.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:52:49.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.051127 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:52:49.051256 systemd[1]: Stopped ignition-mount.service. Jul 2 08:52:49.052868 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:52:49.053015 systemd[1]: Stopped ignition-disks.service. Jul 2 08:52:49.054396 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:52:49.054473 systemd[1]: Stopped ignition-kargs.service. Jul 2 08:52:49.055613 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 08:52:49.055685 systemd[1]: Stopped ignition-fetch.service. Jul 2 08:52:49.056543 systemd[1]: Stopped target network.target. Jul 2 08:52:49.059990 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:52:49.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.060067 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 08:52:49.061282 systemd[1]: Stopped target paths.target. Jul 2 08:52:49.062440 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:52:49.065181 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 08:52:49.066397 systemd[1]: Stopped target slices.target. Jul 2 08:52:49.067849 systemd[1]: Stopped target sockets.target. Jul 2 08:52:49.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.069219 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:52:49.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.069250 systemd[1]: Closed iscsid.socket. Jul 2 08:52:49.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.090000 audit: BPF prog-id=6 op=UNLOAD Jul 2 08:52:49.070306 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:52:49.070333 systemd[1]: Closed iscsiuio.socket. Jul 2 08:52:49.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.071240 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:52:49.071282 systemd[1]: Stopped ignition-setup.service. 
Jul 2 08:52:49.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.072537 systemd[1]: Stopping systemd-networkd.service... Jul 2 08:52:49.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.074642 systemd[1]: Stopping systemd-resolved.service... Jul 2 08:52:49.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.077395 systemd-networkd[624]: eth0: DHCPv6 lease lost Jul 2 08:52:49.103000 audit: BPF prog-id=9 op=UNLOAD Jul 2 08:52:49.079828 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:52:49.080768 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:52:49.080867 systemd[1]: Stopped systemd-networkd.service. Jul 2 08:52:49.086575 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:52:49.086737 systemd[1]: Stopped systemd-resolved.service. Jul 2 08:52:49.089083 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:52:49.089214 systemd[1]: Stopped sysroot-boot.service. Jul 2 08:52:49.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.089929 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:52:49.089978 systemd[1]: Closed systemd-networkd.socket. Jul 2 08:52:49.091654 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:52:49.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.091696 systemd[1]: Stopped initrd-setup-root.service. Jul 2 08:52:49.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.094107 systemd[1]: Stopping network-cleanup.service... Jul 2 08:52:49.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.094680 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:52:49.094737 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 08:52:49.097711 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:52:49.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.097771 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:52:49.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.098827 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 2 08:52:49.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.098869 systemd[1]: Stopped systemd-modules-load.service. Jul 2 08:52:49.104129 systemd[1]: Stopping systemd-udevd.service... Jul 2 08:52:49.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.107720 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 08:52:49.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.108344 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:52:49.108501 systemd[1]: Stopped systemd-udevd.service. Jul 2 08:52:49.111060 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 08:52:49.111102 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 08:52:49.112258 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:52:49.112313 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 08:52:49.112758 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:52:49.112825 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 08:52:49.113447 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:52:49.113498 systemd[1]: Stopped dracut-cmdline.service. Jul 2 08:52:49.114803 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:52:49.114842 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 08:52:49.116718 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 08:52:49.125045 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:52:49.125103 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 08:52:49.126479 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:52:49.126520 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 08:52:49.127274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:52:49.127313 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 08:52:49.129338 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 08:52:49.129863 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:52:49.129950 systemd[1]: Stopped network-cleanup.service. Jul 2 08:52:49.130719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:52:49.130807 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 08:52:49.131648 systemd[1]: Reached target initrd-switch-root.target. Jul 2 08:52:49.133556 systemd[1]: Starting initrd-switch-root.service... Jul 2 08:52:49.150327 systemd[1]: Switching root. Jul 2 08:52:49.167351 iscsid[634]: iscsid shutting down. Jul 2 08:52:49.167990 systemd-journald[185]: Journal stopped Jul 2 08:52:53.758573 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). 
Jul 2 08:52:53.758630 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 08:52:53.758644 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 08:52:53.758660 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 08:52:53.758672 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:52:53.758682 kernel: SELinux: policy capability open_perms=1 Jul 2 08:52:53.758696 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:52:53.758707 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:52:53.758717 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:52:53.758728 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:52:53.758743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:52:53.758753 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:52:53.758764 systemd[1]: Successfully loaded SELinux policy in 99.235ms. Jul 2 08:52:53.758792 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.341ms. Jul 2 08:52:53.758806 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 08:52:53.758820 systemd[1]: Detected virtualization kvm. Jul 2 08:52:53.758831 systemd[1]: Detected architecture x86-64. Jul 2 08:52:53.758843 systemd[1]: Detected first boot. Jul 2 08:52:53.758859 systemd[1]: Hostname set to <ci-3510-3-5-3-6197e17ca9.novalocal>. Jul 2 08:52:53.758871 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:52:53.758882 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 08:52:53.758894 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:52:53.758907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:52:53.758920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:52:53.758933 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 08:52:53.758946 kernel: kauditd_printk_skb: 53 callbacks suppressed Jul 2 08:52:53.758957 kernel: audit: type=1334 audit(1719910373.505:88): prog-id=12 op=LOAD Jul 2 08:52:53.758968 kernel: audit: type=1334 audit(1719910373.505:89): prog-id=3 op=UNLOAD Jul 2 08:52:53.758978 kernel: audit: type=1334 audit(1719910373.507:90): prog-id=13 op=LOAD Jul 2 08:52:53.758992 kernel: audit: type=1334 audit(1719910373.510:91): prog-id=14 op=LOAD Jul 2 08:52:53.759007 kernel: audit: type=1334 audit(1719910373.510:92): prog-id=4 op=UNLOAD Jul 2 08:52:53.759018 kernel: audit: type=1334 audit(1719910373.510:93): prog-id=5 op=UNLOAD Jul 2 08:52:53.759028 kernel: audit: type=1334 audit(1719910373.514:94): prog-id=15 op=LOAD Jul 2 08:52:53.759039 kernel: audit: type=1334 audit(1719910373.514:95): prog-id=12 op=UNLOAD Jul 2 08:52:53.759049 kernel: audit: type=1334 audit(1719910373.517:96): prog-id=16 op=LOAD Jul 2 08:52:53.759060 kernel: audit: type=1334 audit(1719910373.520:97): prog-id=17 op=LOAD Jul 2 08:52:53.759071 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 08:52:53.759082 systemd[1]: Stopped iscsiuio.service. Jul 2 08:52:53.759095 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 08:52:53.759107 systemd[1]: Stopped iscsid.service. Jul 2 08:52:53.759118 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:52:53.759129 systemd[1]: Stopped initrd-switch-root.service. Jul 2 08:52:53.759170 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:52:53.759183 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 08:52:53.759197 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 08:52:53.759210 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 08:52:53.759221 systemd[1]: Created slice system-getty.slice. Jul 2 08:52:53.759232 systemd[1]: Created slice system-modprobe.slice. Jul 2 08:52:53.759244 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 08:52:53.759258 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 08:52:53.759269 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 08:52:53.759281 systemd[1]: Created slice user.slice. Jul 2 08:52:53.759293 systemd[1]: Started systemd-ask-password-console.path. Jul 2 08:52:53.759304 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 08:52:53.759315 systemd[1]: Set up automount boot.automount. Jul 2 08:52:53.759329 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 08:52:53.759340 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 08:52:53.759352 systemd[1]: Stopped target initrd-fs.target. Jul 2 08:52:53.759365 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 08:52:53.759377 systemd[1]: Reached target integritysetup.target. Jul 2 08:52:53.759388 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 08:52:53.759400 systemd[1]: Reached target remote-fs.target. Jul 2 08:52:53.759411 systemd[1]: Reached target slices.target. Jul 2 08:52:53.759422 systemd[1]: Reached target swap.target. Jul 2 08:52:53.759433 systemd[1]: Reached target torcx.target. Jul 2 08:52:53.759444 systemd[1]: Reached target veritysetup.target. Jul 2 08:52:53.759456 systemd[1]: Listening on systemd-coredump.socket. Jul 2 08:52:53.759467 systemd[1]: Listening on systemd-initctl.socket. Jul 2 08:52:53.759480 systemd[1]: Listening on systemd-networkd.socket. Jul 2 08:52:53.759492 systemd[1]: Listening on systemd-udevd-control.socket. 
Jul 2 08:52:53.759503 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 08:52:53.759514 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 08:52:53.759526 systemd[1]: Mounting dev-hugepages.mount... Jul 2 08:52:53.759537 systemd[1]: Mounting dev-mqueue.mount... Jul 2 08:52:53.759548 systemd[1]: Mounting media.mount... Jul 2 08:52:53.759559 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:53.759571 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 08:52:53.759585 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 08:52:53.759596 systemd[1]: Mounting tmp.mount... Jul 2 08:52:53.759607 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 08:52:53.759619 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:52:53.759630 systemd[1]: Starting kmod-static-nodes.service... Jul 2 08:52:53.759642 systemd[1]: Starting modprobe@configfs.service... Jul 2 08:52:53.759653 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:52:53.759665 systemd[1]: Starting modprobe@drm.service... Jul 2 08:52:53.759676 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:52:53.759690 systemd[1]: Starting modprobe@fuse.service... Jul 2 08:52:53.759701 systemd[1]: Starting modprobe@loop.service... Jul 2 08:52:53.759713 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:52:53.759724 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:52:53.759736 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 08:52:53.759748 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:52:53.759759 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 08:52:53.759771 systemd[1]: Stopped systemd-journald.service. Jul 2 08:52:53.759782 systemd[1]: Starting systemd-journald.service... Jul 2 08:52:53.759795 systemd[1]: Starting systemd-modules-load.service... Jul 2 08:52:53.759806 systemd[1]: Starting systemd-network-generator.service... Jul 2 08:52:53.759817 systemd[1]: Starting systemd-remount-fs.service... Jul 2 08:52:53.759828 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 08:52:53.759839 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:52:53.759851 systemd[1]: Stopped verity-setup.service. Jul 2 08:52:53.759863 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:53.759875 systemd[1]: Mounted dev-hugepages.mount. Jul 2 08:52:53.759886 systemd[1]: Mounted dev-mqueue.mount. Jul 2 08:52:53.759899 systemd[1]: Mounted media.mount. Jul 2 08:52:53.759910 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 08:52:53.759921 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 08:52:53.759933 systemd[1]: Mounted tmp.mount. Jul 2 08:52:53.759944 systemd[1]: Finished kmod-static-nodes.service. Jul 2 08:52:53.759958 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:52:53.759969 systemd[1]: Finished modprobe@configfs.service. Jul 2 08:52:53.759980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:52:53.759992 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:52:53.760003 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 2 08:52:53.760017 systemd-journald[914]: Journal started Jul 2 08:52:53.760058 systemd-journald[914]: Runtime Journal (/run/log/journal/42f41e2627f14bfb89b43f899d8948b0) is 4.9M, max 39.5M, 34.5M free. Jul 2 08:52:49.466000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:52:49.558000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:52:49.558000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 08:52:49.559000 audit: BPF prog-id=10 op=LOAD Jul 2 08:52:49.559000 audit: BPF prog-id=10 op=UNLOAD Jul 2 08:52:49.559000 audit: BPF prog-id=11 op=LOAD Jul 2 08:52:49.559000 audit: BPF prog-id=11 op=UNLOAD Jul 2 08:52:49.726000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 08:52:49.726000 audit[847]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:52:49.726000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:52:49.731000 audit[847]: AVC avc: denied { associate } for pid=847 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 08:52:49.731000 audit[847]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=830 pid=847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:52:49.731000 audit: CWD cwd="/" Jul 2 08:52:49.731000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:49.731000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:49.731000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 08:52:53.505000 audit: BPF prog-id=12 op=LOAD Jul 2 08:52:53.505000 audit: BPF prog-id=3 op=UNLOAD Jul 2 08:52:53.507000 audit: BPF prog-id=13 op=LOAD Jul 2 08:52:53.510000 audit: BPF prog-id=14 op=LOAD Jul 2 08:52:53.510000 audit: BPF prog-id=4 
op=UNLOAD Jul 2 08:52:53.510000 audit: BPF prog-id=5 op=UNLOAD Jul 2 08:52:53.514000 audit: BPF prog-id=15 op=LOAD Jul 2 08:52:53.514000 audit: BPF prog-id=12 op=UNLOAD Jul 2 08:52:53.517000 audit: BPF prog-id=16 op=LOAD Jul 2 08:52:53.520000 audit: BPF prog-id=17 op=LOAD Jul 2 08:52:53.520000 audit: BPF prog-id=13 op=UNLOAD Jul 2 08:52:53.520000 audit: BPF prog-id=14 op=UNLOAD Jul 2 08:52:53.523000 audit: BPF prog-id=18 op=LOAD Jul 2 08:52:53.523000 audit: BPF prog-id=15 op=UNLOAD Jul 2 08:52:53.527000 audit: BPF prog-id=19 op=LOAD Jul 2 08:52:53.539000 audit: BPF prog-id=20 op=LOAD Jul 2 08:52:53.539000 audit: BPF prog-id=16 op=UNLOAD Jul 2 08:52:53.539000 audit: BPF prog-id=17 op=UNLOAD Jul 2 08:52:53.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.550000 audit: BPF prog-id=18 op=UNLOAD Jul 2 08:52:53.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.688000 audit: BPF prog-id=21 op=LOAD Jul 2 08:52:53.688000 audit: BPF prog-id=22 op=LOAD Jul 2 08:52:53.689000 audit: BPF prog-id=23 op=LOAD Jul 2 08:52:53.689000 audit: BPF prog-id=19 op=UNLOAD Jul 2 08:52:53.689000 audit: BPF prog-id=20 op=UNLOAD Jul 2 08:52:53.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:52:53.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.754000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 08:52:53.754000 audit[914]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff70c9c6e0 a2=4000 a3=7fff70c9c77c items=0 ppid=1 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:52:53.754000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 08:52:53.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:49.717836 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:52:53.503105 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:52:49.719338 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:52:53.503123 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 08:52:53.764306 systemd[1]: Finished modprobe@drm.service. Jul 2 08:52:49.719362 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:52:53.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.541018 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 2 08:52:49.719416 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 08:52:49.719428 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 08:52:49.719463 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 08:52:49.719477 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 08:52:49.719709 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 08:52:49.719752 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 08:52:49.719767 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 08:52:49.722243 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 08:52:49.722285 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 08:52:53.767007 systemd[1]: Started systemd-journald.service. Jul 2 08:52:53.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:52:49.722308 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 08:52:53.768197 kernel: loop: module loaded Jul 2 08:52:49.722327 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 08:52:49.722349 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 08:52:49.722366 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 08:52:53.076798 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:52:53.077107 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:52:53.077264 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:52:53.077502 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 08:52:53.077564 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 08:52:53.077637 /usr/lib/systemd/system-generators/torcx-generator[847]: time="2024-07-02T08:52:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 08:52:53.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:52:53.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.769792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:52:53.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.770024 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:52:53.771089 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:52:53.771316 systemd[1]: Finished modprobe@loop.service. Jul 2 08:52:53.772394 systemd[1]: Finished systemd-network-generator.service. Jul 2 08:52:53.774694 systemd[1]: Finished systemd-remount-fs.service. Jul 2 08:52:53.776958 kernel: fuse: init (API version 7.34) Jul 2 08:52:53.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.775685 systemd[1]: Reached target network-pre.target. Jul 2 08:52:53.778162 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 08:52:53.778621 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:52:53.781386 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 08:52:53.783422 systemd[1]: Starting systemd-journal-flush.service... Jul 2 08:52:53.784058 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:52:53.785588 systemd[1]: Starting systemd-random-seed.service... Jul 2 08:52:53.786208 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:52:53.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.794113 systemd-journald[914]: Time spent on flushing to /var/log/journal/42f41e2627f14bfb89b43f899d8948b0 is 38.336ms for 1088 entries. Jul 2 08:52:53.794113 systemd-journald[914]: System Journal (/var/log/journal/42f41e2627f14bfb89b43f899d8948b0) is 8.0M, max 584.8M, 576.8M free. Jul 2 08:52:53.852155 systemd-journald[914]: Received client request to flush runtime journal. Jul 2 08:52:53.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:52:53.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.788826 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:52:53.789088 systemd[1]: Finished modprobe@fuse.service. Jul 2 08:52:53.790726 systemd[1]: Finished systemd-modules-load.service. Jul 2 08:52:53.791480 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 08:52:53.796931 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 08:52:53.798700 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:52:53.800336 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 08:52:53.808382 systemd[1]: Finished systemd-random-seed.service. Jul 2 08:52:53.809024 systemd[1]: Reached target first-boot-complete.target. Jul 2 08:52:53.839339 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:52:53.853276 systemd[1]: Finished systemd-journal-flush.service. Jul 2 08:52:53.866949 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 08:52:53.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.869045 systemd[1]: Starting systemd-udev-settle.service... Jul 2 08:52:53.878990 udevadm[952]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 08:52:53.886605 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 08:52:53.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.888792 systemd[1]: Starting systemd-sysusers.service... Jul 2 08:52:53.933943 systemd[1]: Finished systemd-sysusers.service. Jul 2 08:52:53.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:53.935664 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 08:52:53.980557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 08:52:53.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:54.443461 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 08:52:54.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 08:52:54.445000 audit: BPF prog-id=24 op=LOAD Jul 2 08:52:54.446000 audit: BPF prog-id=25 op=LOAD Jul 2 08:52:54.446000 audit: BPF prog-id=7 op=UNLOAD Jul 2 08:52:54.446000 audit: BPF prog-id=8 op=UNLOAD Jul 2 08:52:54.447551 systemd[1]: Starting systemd-udevd.service... Jul 2 08:52:54.497951 systemd-udevd[961]: Using default interface naming scheme 'v252'. Jul 2 08:52:54.568033 systemd[1]: Started systemd-udevd.service. Jul 2 08:52:54.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:54.572000 audit: BPF prog-id=26 op=LOAD Jul 2 08:52:54.576520 systemd[1]: Starting systemd-networkd.service... Jul 2 08:52:54.594000 audit: BPF prog-id=27 op=LOAD Jul 2 08:52:54.594000 audit: BPF prog-id=28 op=LOAD Jul 2 08:52:54.595000 audit: BPF prog-id=29 op=LOAD Jul 2 08:52:54.597914 systemd[1]: Starting systemd-userdbd.service... Jul 2 08:52:54.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:54.645791 systemd[1]: Started systemd-userdbd.service. Jul 2 08:52:54.664475 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 08:52:54.766223 systemd-networkd[970]: lo: Link UP Jul 2 08:52:54.766235 systemd-networkd[970]: lo: Gained carrier Jul 2 08:52:54.766728 systemd-networkd[970]: Enumeration completed Jul 2 08:52:54.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:54.766834 systemd[1]: Started systemd-networkd.service. Jul 2 08:52:54.767098 systemd-networkd[970]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
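[Annotation] systemd-networkd has enumerated its links and is configuring eth0 from the stock catch-all policy at /usr/lib/systemd/network/zz-default.network; the DHCPv4 lease it obtains is logged just below. A minimal sketch of what such a match-everything DHCP fallback usually looks like and how to inspect the applied state — the file contents here are an assumption, not a copy of what this image ships:

  # Illustrative approximation of a catch-all fallback (exact shipped contents may differ):
  #   /usr/lib/systemd/network/zz-default.network
  #   [Match]
  #   Name=*
  #
  #   [Network]
  #   DHCP=yes

  # Inspect the configuration and lease networkd actually applied:
  networkctl list
  networkctl status eth0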
Jul 2 08:52:54.770150 systemd-networkd[970]: eth0: Link UP Jul 2 08:52:54.770161 systemd-networkd[970]: eth0: Gained carrier Jul 2 08:52:54.777281 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 08:52:54.783395 systemd-networkd[970]: eth0: DHCPv4 address 172.24.4.136/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 2 08:52:54.796212 kernel: ACPI: button: Power Button [PWRF] Jul 2 08:52:54.774000 audit[972]: AVC avc: denied { confidentiality } for pid=972 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 08:52:54.774000 audit[972]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564b81202ec0 a1=3207c a2=7fb882638bc5 a3=5 items=108 ppid=961 pid=972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:52:54.774000 audit: CWD cwd="/" Jul 2 08:52:54.774000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=1 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=2 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=3 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=4 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=5 name=(null) inode=14600 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=6 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=7 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=8 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=9 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=10 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=11 name=(null) inode=14603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=12 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=13 name=(null) inode=14604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=14 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=15 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=16 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=17 name=(null) inode=14606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=18 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=19 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=20 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=21 name=(null) inode=14608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=22 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=23 name=(null) inode=14609 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=24 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=25 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=26 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=27 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=28 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=29 name=(null) inode=14612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=30 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=31 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=32 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=33 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=34 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=35 name=(null) inode=14615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=36 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=37 name=(null) inode=14616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=38 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=39 name=(null) inode=14617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=40 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=41 name=(null) inode=14618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=42 name=(null) inode=14598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=43 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=44 name=(null) inode=14619 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=45 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=46 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=47 name=(null) inode=14621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=48 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=49 name=(null) inode=14622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=50 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=51 name=(null) inode=14623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=52 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=53 name=(null) inode=14624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=55 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=56 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=57 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=58 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=59 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=60 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=61 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=62 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=63 name=(null) inode=14629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=64 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=65 name=(null) inode=14630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=66 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=67 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=68 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=69 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=70 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=71 name=(null) inode=14633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=72 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=73 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=74 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=75 name=(null) inode=14635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=76 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH 
item=77 name=(null) inode=14636 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=78 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=79 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=80 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=81 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=82 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=83 name=(null) inode=14639 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=84 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=85 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=86 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=87 name=(null) inode=14641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=88 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=89 name=(null) inode=14642 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=90 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=91 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=92 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=93 name=(null) inode=14644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=94 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=95 name=(null) inode=14645 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=96 name=(null) inode=14625 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=97 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=98 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=99 name=(null) inode=14647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=100 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=101 name=(null) inode=14648 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=102 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=103 name=(null) inode=14649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=104 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=105 name=(null) inode=14650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=106 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PATH item=107 name=(null) inode=14651 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 08:52:54.774000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 08:52:54.799254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
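[Annotation] The long run of raw PATH records above belongs to a single audit event from a (udev-worker) process: an AVC "confidentiality" lockdown check against tracefs, logged permissively, with 108 path items attached to one syscall. Such events are easier to read when rendered by the audit userspace tools; a small sketch, assuming ausearch is available on the host:

  # Render the raw event (fields, paths, SELinux contexts) in human-readable form:
  ausearch --interpret --comm "(udev-worker)"
  # Or narrow by time window instead of command name:
  ausearch --interpret --start recent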
Jul 2 08:52:54.818212 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 08:52:54.824244 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 08:52:54.832170 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 08:52:54.884616 systemd[1]: Finished systemd-udev-settle.service. Jul 2 08:52:54.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:54.886503 systemd[1]: Starting lvm2-activation-early.service... Jul 2 08:52:54.925551 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:52:54.967513 systemd[1]: Finished lvm2-activation-early.service. Jul 2 08:52:54.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:54.968229 systemd[1]: Reached target cryptsetup.target. Jul 2 08:52:54.969965 systemd[1]: Starting lvm2-activation.service... Jul 2 08:52:54.978678 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:52:55.017008 systemd[1]: Finished lvm2-activation.service. Jul 2 08:52:55.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:55.018471 systemd[1]: Reached target local-fs-pre.target. Jul 2 08:52:55.019646 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:52:55.019710 systemd[1]: Reached target local-fs.target. Jul 2 08:52:55.020838 systemd[1]: Reached target machines.target. Jul 2 08:52:55.024727 systemd[1]: Starting ldconfig.service... Jul 2 08:52:55.027848 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:52:55.027942 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:52:55.031079 systemd[1]: Starting systemd-boot-update.service... Jul 2 08:52:55.034892 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 08:52:55.040229 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 08:52:55.052404 systemd[1]: Starting systemd-sysext.service... Jul 2 08:52:55.055252 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl) Jul 2 08:52:55.061655 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 08:52:55.091734 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 08:52:55.098804 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 08:52:55.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:55.105601 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 08:52:55.105796 systemd[1]: Unmounted usr-share-oem.mount. 
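[Annotation] Two warnings above are worth tracing: udevadm notes that systemd-udev-settle.service is deprecated and is only pulled in by the lvm2 activation units, and both lvm2 activation passes report that lvmetad is unreachable, so LVM falls back to device scanning. A short sketch of how to confirm both from a shell, assuming an LVM build that still recognizes the lvmetad setting:

  # Show which units still pull in the deprecated settle service:
  systemctl list-dependencies --reverse systemd-udev-settle.service

  # Check whether the LVM configuration still expects the lvmetad daemon
  # (older LVM builds only; newer releases have removed lvmetad entirely):
  lvmconfig global/use_lvmetad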
Jul 2 08:52:55.167277 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 08:52:55.842026 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:52:55.847616 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 08:52:55.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:55.898514 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:52:55.935223 kernel: loop1: detected capacity change from 0 to 211296 Jul 2 08:52:55.976321 systemd-fsck[1005]: fsck.fat 4.2 (2021-01-31) Jul 2 08:52:55.976321 systemd-fsck[1005]: /dev/vda1: 789 files, 119238/258078 clusters Jul 2 08:52:55.979800 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 08:52:55.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:55.984286 systemd[1]: Mounting boot.mount... Jul 2 08:52:56.002027 (sd-sysext)[1008]: Using extensions 'kubernetes'. Jul 2 08:52:56.003631 (sd-sysext)[1008]: Merged extensions into '/usr'. Jul 2 08:52:56.040672 systemd[1]: Mounted boot.mount. Jul 2 08:52:56.050896 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.056489 systemd[1]: Mounting usr-share-oem.mount... Jul 2 08:52:56.059986 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.064531 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:52:56.072992 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:52:56.076066 systemd[1]: Starting modprobe@loop.service... Jul 2 08:52:56.077567 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.077710 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:52:56.077978 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.082727 systemd[1]: Finished systemd-boot-update.service. Jul 2 08:52:56.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.083541 systemd[1]: Mounted usr-share-oem.mount. Jul 2 08:52:56.084464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:52:56.084599 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:52:56.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.085763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
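[Annotation] (sd-sysext) reports that a 'kubernetes' system extension was found and merged into /usr. A brief sketch of how such an extension is typically laid out and inspected — the image location and file names below are the standard convention, not something this log confirms:

  # Extension images are picked up from /etc/extensions, /run/extensions or
  # /var/lib/extensions; each must carry a matching release file, e.g.
  #   /var/lib/extensions/kubernetes.raw
  #     -> usr/lib/extension-release.d/extension-release.kubernetes
  # List merged extensions, or re-merge after adding/removing images:
  systemd-sysext status
  systemd-sysext refresh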
Jul 2 08:52:56.085884 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:52:56.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.086825 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:52:56.086949 systemd[1]: Finished modprobe@loop.service. Jul 2 08:52:56.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.087953 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:52:56.088080 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.089212 systemd[1]: Finished systemd-sysext.service. Jul 2 08:52:56.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.091128 systemd[1]: Starting ensure-sysext.service... Jul 2 08:52:56.093480 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 08:52:56.101072 systemd[1]: Reloading. Jul 2 08:52:56.107550 systemd-tmpfiles[1016]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 08:52:56.108585 systemd-tmpfiles[1016]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:52:56.111510 systemd-tmpfiles[1016]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:52:56.224962 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-07-02T08:52:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:52:56.225764 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-07-02T08:52:56Z" level=info msg="torcx already run" Jul 2 08:52:56.280544 systemd-networkd[970]: eth0: Gained IPv6LL Jul 2 08:52:56.341883 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:52:56.342076 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:52:56.373672 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
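[Annotation] The reload above surfaces several compatibility notices: locksmithd.service still uses the legacy CPUShares= and MemoryLimit= directives, and docker.socket references a path under /var/run instead of /run. A small sketch of a drop-in that migrates the resource directives; the file name and limit values are placeholders, since the unit's real limits are not shown in this log:

  # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (illustrative)
  #   [Service]
  #   CPUShares=
  #   CPUWeight=100
  #   MemoryLimit=
  #   MemoryMax=128M
  # Apply the override:
  systemctl daemon-reload
  systemctl restart locksmithd.service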
Jul 2 08:52:56.440212 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 08:52:56.461000 audit: BPF prog-id=30 op=LOAD Jul 2 08:52:56.461000 audit: BPF prog-id=21 op=UNLOAD Jul 2 08:52:56.461000 audit: BPF prog-id=31 op=LOAD Jul 2 08:52:56.461000 audit: BPF prog-id=32 op=LOAD Jul 2 08:52:56.461000 audit: BPF prog-id=22 op=UNLOAD Jul 2 08:52:56.461000 audit: BPF prog-id=23 op=UNLOAD Jul 2 08:52:56.462000 audit: BPF prog-id=33 op=LOAD Jul 2 08:52:56.462000 audit: BPF prog-id=26 op=UNLOAD Jul 2 08:52:56.463000 audit: BPF prog-id=34 op=LOAD Jul 2 08:52:56.463000 audit: BPF prog-id=27 op=UNLOAD Jul 2 08:52:56.464000 audit: BPF prog-id=35 op=LOAD Jul 2 08:52:56.464000 audit: BPF prog-id=36 op=LOAD Jul 2 08:52:56.464000 audit: BPF prog-id=28 op=UNLOAD Jul 2 08:52:56.464000 audit: BPF prog-id=29 op=UNLOAD Jul 2 08:52:56.468000 audit: BPF prog-id=37 op=LOAD Jul 2 08:52:56.468000 audit: BPF prog-id=38 op=LOAD Jul 2 08:52:56.468000 audit: BPF prog-id=24 op=UNLOAD Jul 2 08:52:56.468000 audit: BPF prog-id=25 op=UNLOAD Jul 2 08:52:56.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.471104 systemd[1]: Finished ldconfig.service. Jul 2 08:52:56.471935 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 08:52:56.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.477009 systemd[1]: Starting audit-rules.service... Jul 2 08:52:56.478864 systemd[1]: Starting clean-ca-certificates.service... Jul 2 08:52:56.480987 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 08:52:56.485000 audit: BPF prog-id=39 op=LOAD Jul 2 08:52:56.486861 systemd[1]: Starting systemd-resolved.service... Jul 2 08:52:56.489000 audit: BPF prog-id=40 op=LOAD Jul 2 08:52:56.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.490215 systemd[1]: Starting systemd-timesyncd.service... Jul 2 08:52:56.492416 systemd[1]: Starting systemd-update-utmp.service... Jul 2 08:52:56.494121 systemd[1]: Finished clean-ca-certificates.service. Jul 2 08:52:56.495423 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:52:56.505000 audit[1090]: SYSTEM_BOOT pid=1090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.509046 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.509341 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.511419 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:52:56.513711 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:52:56.516113 systemd[1]: Starting modprobe@loop.service... 
Jul 2 08:52:56.517121 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.517294 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:52:56.517436 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:52:56.517540 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.518679 systemd[1]: Finished systemd-update-utmp.service. Jul 2 08:52:56.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.520392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:52:56.520522 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:52:56.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.522713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:52:56.522836 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:52:56.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.523746 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:52:56.523861 systemd[1]: Finished modprobe@loop.service. Jul 2 08:52:56.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.526496 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:52:56.526623 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.529411 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.529623 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.531813 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:52:56.534516 systemd[1]: Starting modprobe@efi_pstore.service... 
Jul 2 08:52:56.536497 systemd[1]: Starting modprobe@loop.service... Jul 2 08:52:56.537393 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.537537 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:52:56.537656 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:52:56.537749 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.539307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:52:56.539629 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:52:56.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.541683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:52:56.541817 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:52:56.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.542641 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:52:56.546994 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:52:56.547155 systemd[1]: Finished modprobe@loop.service. Jul 2 08:52:56.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.548076 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.548397 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.550419 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 08:52:56.553771 systemd[1]: Starting modprobe@drm.service... Jul 2 08:52:56.556463 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 08:52:56.560332 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 08:52:56.560509 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:52:56.562384 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 08:52:56.563043 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:52:56.563211 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 08:52:56.564540 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:52:56.564698 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 08:52:56.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.565657 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:52:56.565785 systemd[1]: Finished modprobe@drm.service. Jul 2 08:52:56.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.567575 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.570253 systemd[1]: Finished ensure-sysext.service. Jul 2 08:52:56.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.581239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:52:56.581384 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 08:52:56.582005 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:52:56.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.582585 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 08:52:56.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.587383 systemd[1]: Finished systemd-journal-catalog-update.service. 
Jul 2 08:52:56.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.589462 systemd[1]: Starting systemd-update-done.service... Jul 2 08:52:56.601031 systemd[1]: Finished systemd-update-done.service. Jul 2 08:52:56.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 08:52:56.606566 augenrules[1116]: No rules Jul 2 08:52:56.606000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 08:52:56.606000 audit[1116]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd96012630 a2=420 a3=0 items=0 ppid=1084 pid=1116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 08:52:56.606000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 08:52:56.609395 systemd[1]: Finished audit-rules.service. Jul 2 08:52:56.634930 systemd-resolved[1087]: Positive Trust Anchors: Jul 2 08:52:56.635508 systemd-resolved[1087]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:52:56.635607 systemd-resolved[1087]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 08:52:56.642310 systemd[1]: Started systemd-timesyncd.service. Jul 2 08:52:56.642907 systemd[1]: Reached target time-set.target. Jul 2 08:52:56.643500 systemd-resolved[1087]: Using system hostname 'ci-3510-3-5-3-6197e17ca9.novalocal'. Jul 2 08:52:56.646472 systemd[1]: Started systemd-resolved.service. Jul 2 08:52:56.646982 systemd[1]: Reached target network.target. Jul 2 08:52:56.647454 systemd[1]: Reached target network-online.target. Jul 2 08:52:56.647912 systemd[1]: Reached target nss-lookup.target. Jul 2 08:52:56.648440 systemd[1]: Reached target sysinit.target. Jul 2 08:52:56.648990 systemd[1]: Started motdgen.path. Jul 2 08:52:56.649470 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 08:52:56.650214 systemd[1]: Started logrotate.timer. Jul 2 08:52:56.650723 systemd[1]: Started mdadm.timer. Jul 2 08:52:56.651173 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 08:52:56.651659 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:52:56.651694 systemd[1]: Reached target paths.target. Jul 2 08:52:56.652112 systemd[1]: Reached target timers.target. Jul 2 08:52:56.652916 systemd[1]: Listening on dbus.socket. Jul 2 08:52:56.654730 systemd[1]: Starting docker.socket... Jul 2 08:52:56.658608 systemd[1]: Listening on sshd.socket. 
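[Annotation] augenrules reports "No rules" above, and auditctl then loads the generated /etc/audit/audit.rules, so the kernel audit subsystem runs without custom rules on this image. For reference, a minimal sketch of what a rules fragment would look like if auditing were wanted; the watched path and key names are hypothetical examples, not part of this system:

  # /etc/audit/rules.d/10-example.rules  (hypothetical fragment)
  #   -w /etc/ssh/sshd_config -p wa -k sshd_config
  #   -a always,exit -F arch=b64 -S execve -k exec
  # Regenerate /etc/audit/audit.rules from rules.d and load it, then list active rules:
  augenrules --load
  auditctl -l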
Jul 2 08:52:56.659271 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:52:56.659689 systemd[1]: Listening on docker.socket. Jul 2 08:52:56.660599 systemd[1]: Reached target sockets.target. Jul 2 08:52:56.661157 systemd[1]: Reached target basic.target. Jul 2 08:52:56.661719 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.661749 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 08:52:56.662971 systemd[1]: Starting containerd.service... Jul 2 08:52:56.664744 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 08:52:57.235304 systemd-timesyncd[1088]: Contacted time server 51.255.95.80:123 (0.flatcar.pool.ntp.org). Jul 2 08:52:57.235474 systemd-timesyncd[1088]: Initial clock synchronization to Tue 2024-07-02 08:52:57.235223 UTC. Jul 2 08:52:57.235648 systemd[1]: Starting dbus.service... Jul 2 08:52:57.236718 systemd-resolved[1087]: Clock change detected. Flushing caches. Jul 2 08:52:57.237304 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 08:52:57.239111 systemd[1]: Starting extend-filesystems.service... Jul 2 08:52:57.239678 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 08:52:57.242390 systemd[1]: Starting kubelet.service... Jul 2 08:52:57.248082 systemd[1]: Starting motdgen.service... Jul 2 08:52:57.256970 jq[1129]: false Jul 2 08:52:57.252403 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 08:52:57.258261 systemd[1]: Starting sshd-keygen.service... Jul 2 08:52:57.265050 systemd[1]: Starting systemd-logind.service... Jul 2 08:52:57.266218 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 08:52:57.266305 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:52:57.266760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:52:57.267832 systemd[1]: Starting update-engine.service... Jul 2 08:52:57.269815 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 08:52:57.275410 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:52:57.275624 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 08:52:57.305402 jq[1143]: true Jul 2 08:52:57.312611 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:52:57.312821 systemd[1]: Finished ssh-key-proc-cmdline.service. 
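[Annotation] systemd-timesyncd has contacted 0.flatcar.pool.ntp.org and performed the initial clock synchronization recorded above (the systemd-resolved cache flush is the direct consequence of that clock jump). A short sketch of how the time source could be overridden and the sync state checked; ntp.example.com is a placeholder, not a server this host uses:

  # /etc/systemd/timesyncd.conf.d/10-ntp.conf  (illustrative drop-in)
  #   [Time]
  #   NTP=ntp.example.com
  systemctl restart systemd-timesyncd.service
  timedatectl timesync-status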
Jul 2 08:52:57.314695 extend-filesystems[1130]: Found loop1 Jul 2 08:52:57.319658 extend-filesystems[1130]: Found vda Jul 2 08:52:57.320524 extend-filesystems[1130]: Found vda1 Jul 2 08:52:57.321236 extend-filesystems[1130]: Found vda2 Jul 2 08:52:57.321820 extend-filesystems[1130]: Found vda3 Jul 2 08:52:57.325589 extend-filesystems[1130]: Found usr Jul 2 08:52:57.329117 extend-filesystems[1130]: Found vda4 Jul 2 08:52:57.329117 extend-filesystems[1130]: Found vda6 Jul 2 08:52:57.329117 extend-filesystems[1130]: Found vda7 Jul 2 08:52:57.329117 extend-filesystems[1130]: Found vda9 Jul 2 08:52:57.329117 extend-filesystems[1130]: Checking size of /dev/vda9 Jul 2 08:52:57.336932 jq[1154]: true Jul 2 08:52:57.347929 dbus-daemon[1126]: [system] SELinux support is enabled Jul 2 08:52:57.348093 systemd[1]: Started dbus.service. Jul 2 08:52:57.350758 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:52:57.350798 systemd[1]: Reached target system-config.target. Jul 2 08:52:57.351583 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 08:52:57.351603 systemd[1]: Reached target user-config.target. Jul 2 08:52:57.358741 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:52:57.358944 systemd[1]: Finished motdgen.service. Jul 2 08:52:57.382111 extend-filesystems[1130]: Resized partition /dev/vda9 Jul 2 08:52:57.401912 extend-filesystems[1175]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 08:52:57.423337 env[1151]: time="2024-07-02T08:52:57.423236380Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 08:52:57.440867 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Jul 2 08:52:57.457973 systemd-logind[1136]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 08:52:57.495403 update_engine[1138]: I0702 08:52:57.464071 1138 main.cc:92] Flatcar Update Engine starting Jul 2 08:52:57.495403 update_engine[1138]: I0702 08:52:57.469691 1138 update_check_scheduler.cc:74] Next update check in 4m43s Jul 2 08:52:57.495630 env[1151]: time="2024-07-02T08:52:57.491221797Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:52:57.458006 systemd-logind[1136]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 08:52:57.458206 systemd-logind[1136]: New seat seat0. Jul 2 08:52:57.469534 systemd[1]: Started update-engine.service. Jul 2 08:52:57.477645 systemd[1]: Started locksmithd.service. Jul 2 08:52:57.478874 systemd[1]: Started systemd-logind.service. Jul 2 08:52:57.499605 env[1151]: time="2024-07-02T08:52:57.498826680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:52:57.500552 env[1151]: time="2024-07-02T08:52:57.500499187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:52:57.500552 env[1151]: time="2024-07-02T08:52:57.500541526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 08:52:57.500793 env[1151]: time="2024-07-02T08:52:57.500755107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:52:57.500793 env[1151]: time="2024-07-02T08:52:57.500784171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:52:57.500904 env[1151]: time="2024-07-02T08:52:57.500801183Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:52:57.500904 env[1151]: time="2024-07-02T08:52:57.500814067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:52:57.501006 env[1151]: time="2024-07-02T08:52:57.500965191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:52:57.501266 env[1151]: time="2024-07-02T08:52:57.501231239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:52:57.501393 env[1151]: time="2024-07-02T08:52:57.501364249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:52:57.501393 env[1151]: time="2024-07-02T08:52:57.501390598Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 08:52:57.501469 env[1151]: time="2024-07-02T08:52:57.501446042Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:52:57.501469 env[1151]: time="2024-07-02T08:52:57.501461622Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:52:57.527919 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Jul 2 08:52:57.618777 extend-filesystems[1175]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 08:52:57.618777 extend-filesystems[1175]: old_desc_blocks = 1, new_desc_blocks = 3 Jul 2 08:52:57.618777 extend-filesystems[1175]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Jul 2 08:52:57.635489 extend-filesystems[1130]: Resized filesystem in /dev/vda9 Jul 2 08:52:57.640580 bash[1179]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:52:57.619192 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.620635582Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.620690676Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.620709701Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.620763422Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621294959Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621319535Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621335224Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621362015Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621379237Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621395027Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621409374Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621425123Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621557371Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:52:57.640990 env[1151]: time="2024-07-02T08:52:57.621648342Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:52:57.619393 systemd[1]: Finished extend-filesystems.service. Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622579147Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622647315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622664498Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622722416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622745059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622759105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622821993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622851198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622866226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622892726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622906241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.622921870Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.623053156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.623075879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.642247 env[1151]: time="2024-07-02T08:52:57.623090356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.624674 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 08:52:57.643159 env[1151]: time="2024-07-02T08:52:57.623103711Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:52:57.643159 env[1151]: time="2024-07-02T08:52:57.623119361Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 08:52:57.643159 env[1151]: time="2024-07-02T08:52:57.623131834Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:52:57.643159 env[1151]: time="2024-07-02T08:52:57.623152473Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 08:52:57.643159 env[1151]: time="2024-07-02T08:52:57.623189863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 08:52:57.630578 systemd[1]: Started containerd.service. 
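[Editor's note] The extend-filesystems/resize2fs entries a few lines above show /dev/vda9 being grown online, while mounted at /, from 1617920 to 4635643 4 KiB blocks. A minimal sketch (using only the block counts and the "(4k)" block size printed in the log) that converts those counts into sizes:

```python
# Convert the ext4 block counts reported by resize2fs into human-readable sizes.
# Block counts are taken verbatim from the journal; 4096 bytes is the "(4k)"
# block size resize2fs reports.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Return the size in GiB for a given ext4 block count."""
    return blocks * block_size / 2**30

old_blocks = 1_617_920   # size before the online resize
new_blocks = 4_635_643   # "resized filesystem to 4635643"

print(f"before: {blocks_to_gib(old_blocks):.2f} GiB")   # ~6.17 GiB
print(f"after:  {blocks_to_gib(new_blocks):.2f} GiB")   # ~17.68 GiB
```

Because ext4 supports online growth, no unmount is needed; the extend-filesystems unit runs resize2fs 1.46.5 directly against the mounted root partition, as logged above.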
Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.623412210Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.623489024Z" level=info msg="Connect containerd service" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.623520884Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.625543738Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.627187811Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.627238526Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.627396392Z" level=info msg="containerd successfully booted in 0.205366s" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.633245642Z" level=info msg="Start subscribing containerd event" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.633297921Z" level=info msg="Start recovering state" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.633366048Z" level=info msg="Start event monitor" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.633379333Z" level=info msg="Start snapshots syncer" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.633390574Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:52:57.643572 env[1151]: time="2024-07-02T08:52:57.633399741Z" level=info msg="Start streaming server" Jul 2 08:52:57.791741 locksmithd[1182]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:52:57.962603 sshd_keygen[1152]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:52:57.989796 systemd[1]: Finished sshd-keygen.service. Jul 2 08:52:57.991982 systemd[1]: Starting issuegen.service... Jul 2 08:52:58.005777 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:52:58.005991 systemd[1]: Finished issuegen.service. Jul 2 08:52:58.008120 systemd[1]: Starting systemd-user-sessions.service... Jul 2 08:52:58.016025 systemd[1]: Finished systemd-user-sessions.service. Jul 2 08:52:58.018183 systemd[1]: Started getty@tty1.service. Jul 2 08:52:58.019867 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 08:52:58.020524 systemd[1]: Reached target getty.target. Jul 2 08:52:59.043293 systemd[1]: Created slice system-sshd.slice. Jul 2 08:52:59.046599 systemd[1]: Started sshd@0-172.24.4.136:22-172.24.4.1:56716.service. Jul 2 08:52:59.101975 systemd[1]: Started kubelet.service. Jul 2 08:53:00.449413 sshd[1205]: Accepted publickey for core from 172.24.4.1 port 56716 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:53:00.455823 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:00.500644 systemd-logind[1136]: New session 1 of user core. Jul 2 08:53:00.504962 systemd[1]: Created slice user-500.slice. Jul 2 08:53:00.509315 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 08:53:00.541105 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 08:53:00.546609 systemd[1]: Starting user@500.service... Jul 2 08:53:00.555069 (systemd)[1217]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:00.651980 systemd[1217]: Queued start job for default target default.target. Jul 2 08:53:00.652933 systemd[1217]: Reached target paths.target. Jul 2 08:53:00.652974 systemd[1217]: Reached target sockets.target. Jul 2 08:53:00.653002 systemd[1217]: Reached target timers.target. Jul 2 08:53:00.653028 systemd[1217]: Reached target basic.target. Jul 2 08:53:00.653158 systemd[1]: Started user@500.service. Jul 2 08:53:00.654601 systemd[1]: Started session-1.scope. Jul 2 08:53:00.655458 systemd[1217]: Reached target default.target. Jul 2 08:53:00.655603 systemd[1217]: Startup finished in 91ms. 
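[Editor's note] During this containerd start the CRI plugin logs "failed to load cni during init ... no network config found in /etc/cni/net.d": no network plugin has installed a config file there yet, and the Cilium pod that will eventually provide one is only scheduled much later in this log. A rough sketch of the discovery the CNI loader performs, with a purely illustrative config shape (not the config this node ends up with):

```python
# Rough illustration of why containerd reports "no network config found in
# /etc/cni/net.d" at this point in the boot. The example conflist below is
# illustrative only; Cilium writes its own config once its agent runs.
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # NetworkPluginConfDir from the CRI config above

def list_cni_configs(conf_dir: Path = CNI_CONF_DIR) -> list[Path]:
    """Return CNI network config files the way the CRI plugin would discover them."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

example_conflist = {           # general shape of a CNI network list, for reference only
    "cniVersion": "0.3.1",
    "name": "example-net",
    "plugins": [{"type": "loopback"}],
}

if not list_cni_configs():
    print("no network config found -> node network reported as not ready")
    print(json.dumps(example_conflist, indent=2))
```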
Jul 2 08:53:01.207391 kubelet[1208]: E0702 08:53:01.207317 1208 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:53:01.210024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:53:01.210169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:53:01.210458 systemd[1]: kubelet.service: Consumed 1.962s CPU time. Jul 2 08:53:01.234213 systemd[1]: Started sshd@1-172.24.4.136:22-172.24.4.1:56726.service. Jul 2 08:53:03.103130 sshd[1226]: Accepted publickey for core from 172.24.4.1 port 56726 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:53:03.106412 sshd[1226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:03.117236 systemd-logind[1136]: New session 2 of user core. Jul 2 08:53:03.118196 systemd[1]: Started session-2.scope. Jul 2 08:53:03.874134 sshd[1226]: pam_unix(sshd:session): session closed for user core Jul 2 08:53:03.880552 systemd[1]: sshd@1-172.24.4.136:22-172.24.4.1:56726.service: Deactivated successfully. Jul 2 08:53:03.882372 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 08:53:03.885767 systemd-logind[1136]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:53:03.887567 systemd[1]: Started sshd@2-172.24.4.136:22-172.24.4.1:56734.service. Jul 2 08:53:03.892194 systemd-logind[1136]: Removed session 2. Jul 2 08:53:04.375616 coreos-metadata[1125]: Jul 02 08:53:04.375 WARN failed to locate config-drive, using the metadata service API instead Jul 2 08:53:04.488766 coreos-metadata[1125]: Jul 02 08:53:04.488 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 2 08:53:05.020383 coreos-metadata[1125]: Jul 02 08:53:05.020 INFO Fetch successful Jul 2 08:53:05.020693 coreos-metadata[1125]: Jul 02 08:53:05.020 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 08:53:05.036068 coreos-metadata[1125]: Jul 02 08:53:05.035 INFO Fetch successful Jul 2 08:53:05.038947 unknown[1125]: wrote ssh authorized keys file for user: core Jul 2 08:53:05.078911 update-ssh-keys[1236]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:53:05.079879 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 08:53:05.080815 systemd[1]: Reached target multi-user.target. Jul 2 08:53:05.083729 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 08:53:05.100239 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 08:53:05.100580 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 08:53:05.101667 systemd[1]: Startup finished in 1.006s (kernel) + 7.551s (initrd) + 15.191s (userspace) = 23.749s. Jul 2 08:53:05.109958 sshd[1232]: Accepted publickey for core from 172.24.4.1 port 56734 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:53:05.112372 sshd[1232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:05.126530 systemd-logind[1136]: New session 3 of user core. Jul 2 08:53:05.130217 systemd[1]: Started session-3.scope. Jul 2 08:53:05.897796 sshd[1232]: pam_unix(sshd:session): session closed for user core Jul 2 08:53:05.903685 systemd[1]: sshd@2-172.24.4.136:22-172.24.4.1:56734.service: Deactivated successfully. 
Jul 2 08:53:05.905211 systemd-logind[1136]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:53:05.905252 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:53:05.907518 systemd-logind[1136]: Removed session 3. Jul 2 08:53:11.217049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:53:11.218190 systemd[1]: Stopped kubelet.service. Jul 2 08:53:11.218476 systemd[1]: kubelet.service: Consumed 1.962s CPU time. Jul 2 08:53:11.221121 systemd[1]: Starting kubelet.service... Jul 2 08:53:11.360264 systemd[1]: Started kubelet.service. Jul 2 08:53:11.846066 kubelet[1245]: E0702 08:53:11.845959 1245 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:53:11.850594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:53:11.850787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:53:15.908183 systemd[1]: Started sshd@3-172.24.4.136:22-172.24.4.1:59952.service. Jul 2 08:53:17.161040 sshd[1253]: Accepted publickey for core from 172.24.4.1 port 59952 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:53:17.163800 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:17.174591 systemd-logind[1136]: New session 4 of user core. Jul 2 08:53:17.174770 systemd[1]: Started session-4.scope. Jul 2 08:53:17.931802 sshd[1253]: pam_unix(sshd:session): session closed for user core Jul 2 08:53:17.941552 systemd[1]: Started sshd@4-172.24.4.136:22-172.24.4.1:59958.service. Jul 2 08:53:17.942960 systemd[1]: sshd@3-172.24.4.136:22-172.24.4.1:59952.service: Deactivated successfully. Jul 2 08:53:17.944473 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 08:53:17.948408 systemd-logind[1136]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:53:17.950811 systemd-logind[1136]: Removed session 4. Jul 2 08:53:19.456637 sshd[1258]: Accepted publickey for core from 172.24.4.1 port 59958 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:53:19.460205 sshd[1258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:19.480275 systemd-logind[1136]: New session 5 of user core. Jul 2 08:53:19.481157 systemd[1]: Started session-5.scope. Jul 2 08:53:20.093656 sshd[1258]: pam_unix(sshd:session): session closed for user core Jul 2 08:53:20.100430 systemd[1]: Started sshd@5-172.24.4.136:22-172.24.4.1:59974.service. Jul 2 08:53:20.104560 systemd[1]: sshd@4-172.24.4.136:22-172.24.4.1:59958.service: Deactivated successfully. Jul 2 08:53:20.107352 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:53:20.110087 systemd-logind[1136]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:53:20.112524 systemd-logind[1136]: Removed session 5. Jul 2 08:53:21.397757 sshd[1264]: Accepted publickey for core from 172.24.4.1 port 59974 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:53:21.401211 sshd[1264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:21.412135 systemd-logind[1136]: New session 6 of user core. Jul 2 08:53:21.413093 systemd[1]: Started session-6.scope. 
Jul 2 08:53:21.880683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:53:21.881159 systemd[1]: Stopped kubelet.service. Jul 2 08:53:21.885251 systemd[1]: Starting kubelet.service... Jul 2 08:53:22.074926 sshd[1264]: pam_unix(sshd:session): session closed for user core Jul 2 08:53:22.085297 systemd[1]: Started sshd@6-172.24.4.136:22-172.24.4.1:59978.service. Jul 2 08:53:22.086594 systemd[1]: sshd@5-172.24.4.136:22-172.24.4.1:59974.service: Deactivated successfully. Jul 2 08:53:22.089251 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:53:22.090742 systemd-logind[1136]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:53:22.091799 systemd-logind[1136]: Removed session 6. Jul 2 08:53:22.111824 systemd[1]: Started kubelet.service. Jul 2 08:53:22.190808 kubelet[1276]: E0702 08:53:22.190637 1276 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:53:22.193560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:53:22.193712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:53:23.724072 sshd[1272]: Accepted publickey for core from 172.24.4.1 port 59978 ssh2: RSA SHA256:VdsmefeXTJb2AXrBK1NRbWKUCaaQF5AjdY0e7XHYE0Q Jul 2 08:53:23.727176 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:53:23.738106 systemd-logind[1136]: New session 7 of user core. Jul 2 08:53:23.738892 systemd[1]: Started session-7.scope. Jul 2 08:53:24.198345 sudo[1284]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:53:24.199657 sudo[1284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:53:24.230241 systemd[1]: Starting coreos-metadata.service... Jul 2 08:53:31.302580 coreos-metadata[1288]: Jul 02 08:53:31.302 WARN failed to locate config-drive, using the metadata service API instead Jul 2 08:53:31.394406 coreos-metadata[1288]: Jul 02 08:53:31.394 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 2 08:53:31.611157 coreos-metadata[1288]: Jul 02 08:53:31.610 INFO Fetch successful Jul 2 08:53:31.611157 coreos-metadata[1288]: Jul 02 08:53:31.610 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 2 08:53:31.627134 coreos-metadata[1288]: Jul 02 08:53:31.626 INFO Fetch successful Jul 2 08:53:31.627402 coreos-metadata[1288]: Jul 02 08:53:31.627 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 2 08:53:31.645556 coreos-metadata[1288]: Jul 02 08:53:31.645 INFO Fetch successful Jul 2 08:53:31.645793 coreos-metadata[1288]: Jul 02 08:53:31.645 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 2 08:53:31.662565 coreos-metadata[1288]: Jul 02 08:53:31.662 INFO Fetch successful Jul 2 08:53:31.662839 coreos-metadata[1288]: Jul 02 08:53:31.662 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 2 08:53:31.678140 coreos-metadata[1288]: Jul 02 08:53:31.677 INFO Fetch successful Jul 2 08:53:31.696411 systemd[1]: Finished coreos-metadata.service. Jul 2 08:53:32.217273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 08:53:32.217752 systemd[1]: Stopped kubelet.service. 
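[Editor's note] The coreos-metadata entries above warn that no config-drive is present and fall back to the EC2-compatible metadata API that OpenStack exposes at 169.254.169.254, fetching hostname, instance-id, instance-type, local-ipv4 and public-ipv4 (the URLs appear verbatim in the log). A minimal sketch of that fallback fetch, assuming the link-local endpoint is reachable from the instance; this is an illustration, not the coreos-metadata implementation:

```python
# Mirrors the "Fetching http://169.254.169.254/latest/meta-data/... Fetch successful"
# lines logged by coreos-metadata when it cannot locate a config-drive.
from urllib.request import urlopen

METADATA_BASE = "http://169.254.169.254/latest/meta-data"

def fetch(item: str, timeout: float = 2.0) -> str:
    """Fetch a single metadata item, e.g. 'hostname' or 'instance-id'."""
    with urlopen(f"{METADATA_BASE}/{item}", timeout=timeout) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    for item in ("hostname", "instance-id", "instance-type",
                 "local-ipv4", "public-ipv4"):
        print(item, "=", fetch(item))
```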
Jul 2 08:53:32.222228 systemd[1]: Starting kubelet.service... Jul 2 08:53:32.888706 systemd[1]: Started kubelet.service. Jul 2 08:53:32.986315 kubelet[1311]: E0702 08:53:32.986234 1311 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:53:32.990612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:53:32.990974 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:53:34.916442 systemd[1]: Stopped kubelet.service. Jul 2 08:53:34.923302 systemd[1]: Starting kubelet.service... Jul 2 08:53:34.970685 systemd[1]: Reloading. Jul 2 08:53:35.092214 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2024-07-02T08:53:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 08:53:35.096033 /usr/lib/systemd/system-generators/torcx-generator[1360]: time="2024-07-02T08:53:35Z" level=info msg="torcx already run" Jul 2 08:53:35.217337 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 08:53:35.217359 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 08:53:35.240649 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:53:35.385749 systemd[1]: Started kubelet.service. Jul 2 08:53:35.391259 systemd[1]: Stopping kubelet.service... Jul 2 08:53:35.392487 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:53:35.392674 systemd[1]: Stopped kubelet.service. Jul 2 08:53:35.394345 systemd[1]: Starting kubelet.service... Jul 2 08:53:35.478448 systemd[1]: Started kubelet.service. Jul 2 08:53:36.278307 kubelet[1418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:53:36.278307 kubelet[1418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:53:36.278307 kubelet[1418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
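[Editor's note] Every kubelet failure above (restart counters 1 through 3) is the same error: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status 1 until the file is provisioned and systemd reloads. A small sketch of that check, with the general shape of the file; the apiVersion/kind are the real KubeletConfiguration schema, and staticPodPath and the systemd cgroup driver match what the later successful start logs, but the values shown are illustrative rather than this node's eventual configuration:

```python
# The repeated kubelet failures stem from a missing /var/lib/kubelet/config.yaml.
# Illustrative only: the values below are assumptions, not this node's real config.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

MINIMAL_CONFIG_YAML = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # matches SystemdCgroup=true in the containerd CRI config
staticPodPath: /etc/kubernetes/manifests
"""

if not KUBELET_CONFIG.exists():
    # This is the state during the failed starts between 08:53:01 and 08:53:32.
    print(f"{KUBELET_CONFIG} missing -> kubelet exits with status=1/FAILURE")
    print("expected shape of the file:\n" + MINIMAL_CONFIG_YAML)
```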
Jul 2 08:53:36.279384 kubelet[1418]: I0702 08:53:36.278300 1418 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:53:36.832065 kubelet[1418]: I0702 08:53:36.832009 1418 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 08:53:36.832065 kubelet[1418]: I0702 08:53:36.832045 1418 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:53:36.832510 kubelet[1418]: I0702 08:53:36.832475 1418 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 08:53:36.867598 kubelet[1418]: I0702 08:53:36.867538 1418 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:53:36.888735 kubelet[1418]: I0702 08:53:36.888679 1418 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 08:53:36.889744 kubelet[1418]: I0702 08:53:36.889710 1418 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:53:36.890490 kubelet[1418]: I0702 08:53:36.890436 1418 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:53:36.892762 kubelet[1418]: I0702 08:53:36.892677 1418 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:53:36.893171 kubelet[1418]: I0702 08:53:36.893115 1418 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:53:36.893736 kubelet[1418]: I0702 08:53:36.893697 1418 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:53:36.894295 kubelet[1418]: I0702 08:53:36.894261 1418 kubelet.go:396] "Attempting to sync node with API server" Jul 2 08:53:36.894502 kubelet[1418]: I0702 08:53:36.894477 1418 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:53:36.894770 kubelet[1418]: I0702 08:53:36.894734 1418 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:53:36.895048 kubelet[1418]: I0702 08:53:36.895001 1418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:53:36.901056 kubelet[1418]: E0702 08:53:36.901016 
1418 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:36.901394 kubelet[1418]: E0702 08:53:36.901358 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:36.906289 kubelet[1418]: I0702 08:53:36.906251 1418 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 08:53:36.916016 kubelet[1418]: I0702 08:53:36.915962 1418 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:53:36.921584 kubelet[1418]: W0702 08:53:36.921358 1418 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "172.24.4.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 08:53:36.921584 kubelet[1418]: E0702 08:53:36.921579 1418 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 08:53:36.922690 kubelet[1418]: W0702 08:53:36.922533 1418 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 08:53:36.922690 kubelet[1418]: E0702 08:53:36.922621 1418 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 08:53:36.923117 kubelet[1418]: W0702 08:53:36.922937 1418 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 08:53:36.924401 kubelet[1418]: I0702 08:53:36.924352 1418 server.go:1256] "Started kubelet" Jul 2 08:53:36.941242 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 08:53:36.941575 kubelet[1418]: I0702 08:53:36.941526 1418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:53:36.953154 kubelet[1418]: I0702 08:53:36.953046 1418 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:53:36.955379 kubelet[1418]: I0702 08:53:36.955327 1418 server.go:461] "Adding debug handlers to kubelet server" Jul 2 08:53:36.963508 kubelet[1418]: E0702 08:53:36.963465 1418 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:53:36.964136 kubelet[1418]: I0702 08:53:36.964104 1418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:53:36.964742 kubelet[1418]: I0702 08:53:36.964710 1418 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:53:36.971797 kubelet[1418]: I0702 08:53:36.971663 1418 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:53:36.982977 kubelet[1418]: I0702 08:53:36.972344 1418 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 08:53:36.983937 kubelet[1418]: I0702 08:53:36.983236 1418 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 08:53:36.984211 kubelet[1418]: I0702 08:53:36.984147 1418 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:53:36.984441 kubelet[1418]: I0702 08:53:36.984399 1418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:53:36.988616 kubelet[1418]: I0702 08:53:36.988592 1418 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:53:37.011891 kubelet[1418]: E0702 08:53:37.007484 1418 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.136\" not found" node="172.24.4.136" Jul 2 08:53:37.016310 kubelet[1418]: I0702 08:53:37.016289 1418 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:53:37.016477 kubelet[1418]: I0702 08:53:37.016466 1418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:53:37.016546 kubelet[1418]: I0702 08:53:37.016537 1418 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:53:37.020726 kubelet[1418]: I0702 08:53:37.020712 1418 policy_none.go:49] "None policy: Start" Jul 2 08:53:37.021606 kubelet[1418]: I0702 08:53:37.021594 1418 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:53:37.021704 kubelet[1418]: I0702 08:53:37.021694 1418 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:53:37.032443 systemd[1]: Created slice kubepods.slice. Jul 2 08:53:37.040333 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 08:53:37.053269 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 08:53:37.058564 kubelet[1418]: I0702 08:53:37.058521 1418 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:53:37.058829 kubelet[1418]: I0702 08:53:37.058806 1418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:53:37.061881 kubelet[1418]: E0702 08:53:37.061708 1418 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.136\" not found" Jul 2 08:53:37.073650 kubelet[1418]: I0702 08:53:37.073593 1418 kubelet_node_status.go:73] "Attempting to register node" node="172.24.4.136" Jul 2 08:53:37.085529 kubelet[1418]: I0702 08:53:37.085433 1418 kubelet_node_status.go:76] "Successfully registered node" node="172.24.4.136" Jul 2 08:53:37.113739 kubelet[1418]: E0702 08:53:37.113682 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.146494 kubelet[1418]: I0702 08:53:37.146404 1418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:53:37.147871 kubelet[1418]: I0702 08:53:37.147789 1418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 08:53:37.147871 kubelet[1418]: I0702 08:53:37.147832 1418 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:53:37.148020 kubelet[1418]: I0702 08:53:37.147906 1418 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 08:53:37.148020 kubelet[1418]: E0702 08:53:37.147959 1418 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 08:53:37.215141 kubelet[1418]: E0702 08:53:37.215076 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.317280 kubelet[1418]: E0702 08:53:37.317187 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.419662 kubelet[1418]: E0702 08:53:37.419419 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.520521 kubelet[1418]: E0702 08:53:37.520459 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.622050 kubelet[1418]: E0702 08:53:37.621978 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.723328 kubelet[1418]: E0702 08:53:37.723235 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.824160 kubelet[1418]: E0702 08:53:37.824089 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:37.840553 kubelet[1418]: I0702 08:53:37.840487 1418 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 08:53:37.841650 kubelet[1418]: W0702 08:53:37.840735 1418 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 08:53:37.841650 kubelet[1418]: W0702 08:53:37.840800 1418 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very 
short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 08:53:37.841650 kubelet[1418]: W0702 08:53:37.840909 1418 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jul 2 08:53:37.902576 kubelet[1418]: E0702 08:53:37.902482 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:37.925361 kubelet[1418]: E0702 08:53:37.925293 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:38.026418 kubelet[1418]: E0702 08:53:38.026219 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:38.029726 sudo[1284]: pam_unix(sudo:session): session closed for user root Jul 2 08:53:38.126532 kubelet[1418]: E0702 08:53:38.126465 1418 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.24.4.136\" not found" Jul 2 08:53:38.229240 kubelet[1418]: I0702 08:53:38.229199 1418 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 08:53:38.230312 env[1151]: time="2024-07-02T08:53:38.230117454Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 08:53:38.231351 kubelet[1418]: I0702 08:53:38.231282 1418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 08:53:38.324628 sshd[1272]: pam_unix(sshd:session): session closed for user core Jul 2 08:53:38.331752 systemd[1]: sshd@6-172.24.4.136:22-172.24.4.1:59978.service: Deactivated successfully. Jul 2 08:53:38.333336 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:53:38.333637 systemd[1]: session-7.scope: Consumed 1.212s CPU time. Jul 2 08:53:38.335087 systemd-logind[1136]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:53:38.338583 systemd-logind[1136]: Removed session 7. Jul 2 08:53:38.903428 kubelet[1418]: I0702 08:53:38.903351 1418 apiserver.go:52] "Watching apiserver" Jul 2 08:53:38.904370 kubelet[1418]: E0702 08:53:38.904306 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:38.914826 kubelet[1418]: I0702 08:53:38.914757 1418 topology_manager.go:215] "Topology Admit Handler" podUID="889aa950-7444-4949-89e6-6576480ffcd9" podNamespace="kube-system" podName="cilium-r9whw" Jul 2 08:53:38.915091 kubelet[1418]: I0702 08:53:38.915037 1418 topology_manager.go:215] "Topology Admit Handler" podUID="2024352b-5f4b-455c-99ff-909b63863250" podNamespace="kube-system" podName="kube-proxy-gl5ht" Jul 2 08:53:38.928773 systemd[1]: Created slice kubepods-besteffort-pod2024352b_5f4b_455c_99ff_909b63863250.slice. Jul 2 08:53:38.947743 systemd[1]: Created slice kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice. 
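[Editor's note] With the systemd cgroup driver, the two pods admitted above land in per-pod slices under their QoS-class slice: kubepods-besteffort-pod2024352b_5f4b_455c_99ff_909b63863250.slice for kube-proxy-gl5ht and kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice for cilium-r9whw. A small sketch of the naming rule visible in these lines (pod UID with dashes mapped to underscores, nested under the QoS slice); this is reconstructed from the journal entries, not taken from kubelet source:

```python
# Reconstructs the systemd slice names kubelet creates above:
# <qos slice>-pod<uid with '-' replaced by '_'>.slice under kubepods.slice.
def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    """Return the systemd slice unit name for a pod under the given QoS class."""
    uid = pod_uid.replace("-", "_")
    base = {"guaranteed": "kubepods",
            "burstable": "kubepods-burstable",
            "besteffort": "kubepods-besteffort"}[qos_class]
    return f"{base}-pod{uid}.slice"

assert pod_slice_name("2024352b-5f4b-455c-99ff-909b63863250", "besteffort") \
    == "kubepods-besteffort-pod2024352b_5f4b_455c_99ff_909b63863250.slice"
assert pod_slice_name("889aa950-7444-4949-89e6-6576480ffcd9", "burstable") \
    == "kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice"
print("slice names match the units created in the journal")
```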
Jul 2 08:53:38.985234 kubelet[1418]: I0702 08:53:38.985142 1418 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 08:53:38.996602 kubelet[1418]: I0702 08:53:38.996535 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cni-path\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.996771 kubelet[1418]: I0702 08:53:38.996638 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-lib-modules\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.996771 kubelet[1418]: I0702 08:53:38.996699 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/889aa950-7444-4949-89e6-6576480ffcd9-clustermesh-secrets\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997002 kubelet[1418]: I0702 08:53:38.996773 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-kernel\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997002 kubelet[1418]: I0702 08:53:38.996875 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2024352b-5f4b-455c-99ff-909b63863250-xtables-lock\") pod \"kube-proxy-gl5ht\" (UID: \"2024352b-5f4b-455c-99ff-909b63863250\") " pod="kube-system/kube-proxy-gl5ht" Jul 2 08:53:38.997002 kubelet[1418]: I0702 08:53:38.996939 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2024352b-5f4b-455c-99ff-909b63863250-lib-modules\") pod \"kube-proxy-gl5ht\" (UID: \"2024352b-5f4b-455c-99ff-909b63863250\") " pod="kube-system/kube-proxy-gl5ht" Jul 2 08:53:38.997220 kubelet[1418]: I0702 08:53:38.997030 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxlr9\" (UniqueName: \"kubernetes.io/projected/2024352b-5f4b-455c-99ff-909b63863250-kube-api-access-vxlr9\") pod \"kube-proxy-gl5ht\" (UID: \"2024352b-5f4b-455c-99ff-909b63863250\") " pod="kube-system/kube-proxy-gl5ht" Jul 2 08:53:38.997220 kubelet[1418]: I0702 08:53:38.997084 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-bpf-maps\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997220 kubelet[1418]: I0702 08:53:38.997138 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-hostproc\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997220 
kubelet[1418]: I0702 08:53:38.997196 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-cgroup\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997463 kubelet[1418]: I0702 08:53:38.997252 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/889aa950-7444-4949-89e6-6576480ffcd9-cilium-config-path\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997463 kubelet[1418]: I0702 08:53:38.997360 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-net\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997598 kubelet[1418]: I0702 08:53:38.997463 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq2kk\" (UniqueName: \"kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-kube-api-access-wq2kk\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997598 kubelet[1418]: I0702 08:53:38.997537 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-etc-cni-netd\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997598 kubelet[1418]: I0702 08:53:38.997590 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-xtables-lock\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997783 kubelet[1418]: I0702 08:53:38.997657 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-run\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997783 kubelet[1418]: I0702 08:53:38.997718 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-hubble-tls\") pod \"cilium-r9whw\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " pod="kube-system/cilium-r9whw" Jul 2 08:53:38.997957 kubelet[1418]: I0702 08:53:38.997801 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2024352b-5f4b-455c-99ff-909b63863250-kube-proxy\") pod \"kube-proxy-gl5ht\" (UID: \"2024352b-5f4b-455c-99ff-909b63863250\") " pod="kube-system/kube-proxy-gl5ht" Jul 2 08:53:39.241939 env[1151]: time="2024-07-02T08:53:39.241460623Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-gl5ht,Uid:2024352b-5f4b-455c-99ff-909b63863250,Namespace:kube-system,Attempt:0,}" Jul 2 08:53:39.262657 env[1151]: time="2024-07-02T08:53:39.262556825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9whw,Uid:889aa950-7444-4949-89e6-6576480ffcd9,Namespace:kube-system,Attempt:0,}" Jul 2 08:53:39.904877 kubelet[1418]: E0702 08:53:39.904765 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:40.063434 env[1151]: time="2024-07-02T08:53:40.063348766Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.066786 env[1151]: time="2024-07-02T08:53:40.066726142Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.070633 env[1151]: time="2024-07-02T08:53:40.070543959Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.074186 env[1151]: time="2024-07-02T08:53:40.074127263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.079328 env[1151]: time="2024-07-02T08:53:40.079134045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.086978 env[1151]: time="2024-07-02T08:53:40.086921265Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.090002 env[1151]: time="2024-07-02T08:53:40.089823654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.101329 env[1151]: time="2024-07-02T08:53:40.101244293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:40.108784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1270828298.mount: Deactivated successfully. Jul 2 08:53:40.172399 env[1151]: time="2024-07-02T08:53:40.172144072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:53:40.172730 env[1151]: time="2024-07-02T08:53:40.172658474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:53:40.173143 env[1151]: time="2024-07-02T08:53:40.173060222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:53:40.174366 env[1151]: time="2024-07-02T08:53:40.173955252Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561 pid=1476 runtime=io.containerd.runc.v2 Jul 2 08:53:40.179036 env[1151]: time="2024-07-02T08:53:40.178969247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:53:40.179134 env[1151]: time="2024-07-02T08:53:40.179039379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:53:40.179134 env[1151]: time="2024-07-02T08:53:40.179057684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:53:40.179266 env[1151]: time="2024-07-02T08:53:40.179233215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cc295cfaf57aca6e0acaa6fc9fafe11090036f17bd65ab106fe4f5b463c2a4d pid=1478 runtime=io.containerd.runc.v2 Jul 2 08:53:40.197669 systemd[1]: Started cri-containerd-39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561.scope. Jul 2 08:53:40.222468 systemd[1]: Started cri-containerd-1cc295cfaf57aca6e0acaa6fc9fafe11090036f17bd65ab106fe4f5b463c2a4d.scope. Jul 2 08:53:40.260646 env[1151]: time="2024-07-02T08:53:40.260585216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9whw,Uid:889aa950-7444-4949-89e6-6576480ffcd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\"" Jul 2 08:53:40.269349 env[1151]: time="2024-07-02T08:53:40.269292633Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 08:53:40.272172 env[1151]: time="2024-07-02T08:53:40.272127004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gl5ht,Uid:2024352b-5f4b-455c-99ff-909b63863250,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cc295cfaf57aca6e0acaa6fc9fafe11090036f17bd65ab106fe4f5b463c2a4d\"" Jul 2 08:53:40.905647 kubelet[1418]: E0702 08:53:40.905532 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:41.906725 kubelet[1418]: E0702 08:53:41.906654 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:42.353997 update_engine[1138]: I0702 08:53:42.353916 1138 update_attempter.cc:509] Updating boot flags... 
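[editor's note] The entries above show containerd starting two runc v2 shims, returning sandbox IDs for cilium-r9whw and kube-proxy-gl5ht, and the kubelet then requesting the cilium image. A minimal sketch, using the containerd Go client, of what that PullImage request corresponds to against the same socket and the k8s.io namespace seen in the log; this is illustrative only, not the kubelet's actual code path:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd socket the kubelet's CRI requests go through.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed content lives in the "k8s.io" namespace, matching the namespace=k8s.io fields above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same image reference the kubelet logged for the cilium-r9whw pod.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```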
Jul 2 08:53:42.907004 kubelet[1418]: E0702 08:53:42.906915 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:43.907801 kubelet[1418]: E0702 08:53:43.907713 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:44.909038 kubelet[1418]: E0702 08:53:44.908875 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:45.910082 kubelet[1418]: E0702 08:53:45.909987 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:46.910651 kubelet[1418]: E0702 08:53:46.910549 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:47.271700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735519690.mount: Deactivated successfully. Jul 2 08:53:47.911212 kubelet[1418]: E0702 08:53:47.911102 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:48.911713 kubelet[1418]: E0702 08:53:48.911584 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:49.912500 kubelet[1418]: E0702 08:53:49.912325 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:50.913326 kubelet[1418]: E0702 08:53:50.913256 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:51.604940 env[1151]: time="2024-07-02T08:53:51.604809367Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:51.608602 env[1151]: time="2024-07-02T08:53:51.608523144Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:51.612455 env[1151]: time="2024-07-02T08:53:51.612370152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:51.614381 env[1151]: time="2024-07-02T08:53:51.614307017Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 08:53:51.617913 env[1151]: time="2024-07-02T08:53:51.617817552Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 08:53:51.620412 env[1151]: time="2024-07-02T08:53:51.620160630Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:53:51.655809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765008143.mount: Deactivated successfully. 
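[editor's note] The file_linux.go:61 message that repeats above is the kubelet's static-pod file source noticing that /etc/kubernetes/manifests does not exist; it is explicitly ignored and harmless. A trivial sketch of the check, assuming you actually want static pods on this node (creating the directory, even empty, makes the message stop):

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	const manifests = "/etc/kubernetes/manifests" // path taken from the log line above
	if _, err := os.Stat(manifests); os.IsNotExist(err) {
		// The kubelet periodically re-lists this path; once it exists the error disappears.
		if err := os.MkdirAll(manifests, 0o755); err != nil {
			log.Fatal(err)
		}
		fmt.Println("created", manifests)
		return
	}
	fmt.Println(manifests, "already present")
}
```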
Jul 2 08:53:51.673700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015136810.mount: Deactivated successfully. Jul 2 08:53:51.687352 env[1151]: time="2024-07-02T08:53:51.687306346Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\"" Jul 2 08:53:51.688401 env[1151]: time="2024-07-02T08:53:51.688377471Z" level=info msg="StartContainer for \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\"" Jul 2 08:53:51.729410 systemd[1]: Started cri-containerd-5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3.scope. Jul 2 08:53:51.776552 env[1151]: time="2024-07-02T08:53:51.774929741Z" level=info msg="StartContainer for \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\" returns successfully" Jul 2 08:53:51.782999 systemd[1]: cri-containerd-5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3.scope: Deactivated successfully. Jul 2 08:53:51.996873 kubelet[1418]: E0702 08:53:51.913675 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:52.591266 env[1151]: time="2024-07-02T08:53:52.591149908Z" level=info msg="shim disconnected" id=5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3 Jul 2 08:53:52.591568 env[1151]: time="2024-07-02T08:53:52.591269664Z" level=warning msg="cleaning up after shim disconnected" id=5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3 namespace=k8s.io Jul 2 08:53:52.591568 env[1151]: time="2024-07-02T08:53:52.591298427Z" level=info msg="cleaning up dead shim" Jul 2 08:53:52.619083 env[1151]: time="2024-07-02T08:53:52.618950615Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:53:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1609 runtime=io.containerd.runc.v2\n" Jul 2 08:53:52.646733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3-rootfs.mount: Deactivated successfully. Jul 2 08:53:52.915574 kubelet[1418]: E0702 08:53:52.914616 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:53.366928 env[1151]: time="2024-07-02T08:53:53.359616162Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:53:53.418995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777456715.mount: Deactivated successfully. Jul 2 08:53:53.437552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133073447.mount: Deactivated successfully. Jul 2 08:53:53.494431 env[1151]: time="2024-07-02T08:53:53.494376817Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\"" Jul 2 08:53:53.510466 env[1151]: time="2024-07-02T08:53:53.510419220Z" level=info msg="StartContainer for \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\"" Jul 2 08:53:53.542549 systemd[1]: Started cri-containerd-87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00.scope. 
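[editor's note] The CreateContainer/StartContainer pairs above (mount-cgroup, then apply-sysctl-overwrites) are the kubelet walking cilium's init containers through the CRI. A hedged sketch of the same two calls made directly against containerd's CRI endpoint with the cri-api client; the sandbox ID and image are copied from the log, everything else is simplified:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI endpoint the kubelet uses on this node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// "CreateContainer within sandbox ... &ContainerMetadata{Name:mount-cgroup,...}"
	// maps onto a CreateContainer call against the already-running sandbox ID.
	createResp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: "39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.12.5"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// "StartContainer for ... returns successfully" is the matching StartContainer call.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: createResp.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```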
Jul 2 08:53:53.591272 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:53:53.591534 systemd[1]: Stopped systemd-sysctl.service. Jul 2 08:53:53.591727 systemd[1]: Stopping systemd-sysctl.service... Jul 2 08:53:53.593427 systemd[1]: Starting systemd-sysctl.service... Jul 2 08:53:53.593693 systemd[1]: cri-containerd-87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00.scope: Deactivated successfully. Jul 2 08:53:53.606037 systemd[1]: Finished systemd-sysctl.service. Jul 2 08:53:53.611869 env[1151]: time="2024-07-02T08:53:53.610799714Z" level=info msg="StartContainer for \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\" returns successfully" Jul 2 08:53:53.693999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00-rootfs.mount: Deactivated successfully. Jul 2 08:53:53.859335 env[1151]: time="2024-07-02T08:53:53.859156159Z" level=info msg="shim disconnected" id=87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00 Jul 2 08:53:53.860212 env[1151]: time="2024-07-02T08:53:53.860164416Z" level=warning msg="cleaning up after shim disconnected" id=87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00 namespace=k8s.io Jul 2 08:53:53.860430 env[1151]: time="2024-07-02T08:53:53.860394880Z" level=info msg="cleaning up dead shim" Jul 2 08:53:53.896354 env[1151]: time="2024-07-02T08:53:53.896235258Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:53:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1674 runtime=io.containerd.runc.v2\n" Jul 2 08:53:53.915759 kubelet[1418]: E0702 08:53:53.915646 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:54.293859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005147873.mount: Deactivated successfully. Jul 2 08:53:54.341133 env[1151]: time="2024-07-02T08:53:54.341009399Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:53:54.381109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343857709.mount: Deactivated successfully. Jul 2 08:53:54.398653 env[1151]: time="2024-07-02T08:53:54.398531347Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\"" Jul 2 08:53:54.400513 env[1151]: time="2024-07-02T08:53:54.400403297Z" level=info msg="StartContainer for \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\"" Jul 2 08:53:54.428213 systemd[1]: Started cri-containerd-05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406.scope. Jul 2 08:53:54.471410 systemd[1]: cri-containerd-05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406.scope: Deactivated successfully. 
Jul 2 08:53:54.473377 env[1151]: time="2024-07-02T08:53:54.473235395Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice/cri-containerd-05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406.scope/memory.events\": no such file or directory" Jul 2 08:53:54.478355 env[1151]: time="2024-07-02T08:53:54.478323062Z" level=info msg="StartContainer for \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\" returns successfully" Jul 2 08:53:54.769948 env[1151]: time="2024-07-02T08:53:54.769819939Z" level=info msg="shim disconnected" id=05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406 Jul 2 08:53:54.770556 env[1151]: time="2024-07-02T08:53:54.770485150Z" level=warning msg="cleaning up after shim disconnected" id=05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406 namespace=k8s.io Jul 2 08:53:54.770739 env[1151]: time="2024-07-02T08:53:54.770703531Z" level=info msg="cleaning up dead shim" Jul 2 08:53:54.807981 env[1151]: time="2024-07-02T08:53:54.807901649Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:53:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1731 runtime=io.containerd.runc.v2\n" Jul 2 08:53:54.916975 kubelet[1418]: E0702 08:53:54.916812 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:55.347466 env[1151]: time="2024-07-02T08:53:55.347383544Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:53:55.378299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160666302.mount: Deactivated successfully. Jul 2 08:53:55.382988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903826546.mount: Deactivated successfully. 
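[editor's note] The cgroupsv2.Manager.EventChan warning above is benign: mount-bpf-fs (container 05ebe6aa…) exits almost immediately, so its .scope cgroup, and with it the memory.events file containerd wants to watch for OOM notifications, is gone before the inotify watch can be added. A small sketch of the file that was being watched, path copied from the warning:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path pattern taken from the warning above; the concrete scope directory
	// only exists while the container's cgroup is alive.
	p := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/" +
		"kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice/" +
		"cri-containerd-05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406.scope/memory.events"

	data, err := os.ReadFile(p)
	if err != nil {
		// This is exactly the situation containerd hit: the cgroup was already removed.
		fmt.Println("cgroup already gone:", err)
		return
	}
	fmt.Print(string(data)) // low / high / max / oom / oom_kill counters
}
```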
Jul 2 08:53:55.391032 env[1151]: time="2024-07-02T08:53:55.390921903Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\"" Jul 2 08:53:55.392635 env[1151]: time="2024-07-02T08:53:55.392564751Z" level=info msg="StartContainer for \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\"" Jul 2 08:53:55.396316 env[1151]: time="2024-07-02T08:53:55.396233108Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:55.403494 env[1151]: time="2024-07-02T08:53:55.403401486Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:55.407889 env[1151]: time="2024-07-02T08:53:55.407774387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:55.415194 env[1151]: time="2024-07-02T08:53:55.415126410Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:53:55.415775 env[1151]: time="2024-07-02T08:53:55.415699257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 08:53:55.420340 env[1151]: time="2024-07-02T08:53:55.420271864Z" level=info msg="CreateContainer within sandbox \"1cc295cfaf57aca6e0acaa6fc9fafe11090036f17bd65ab106fe4f5b463c2a4d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:53:55.452375 systemd[1]: Started cri-containerd-14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773.scope. Jul 2 08:53:55.462090 env[1151]: time="2024-07-02T08:53:55.462040386Z" level=info msg="CreateContainer within sandbox \"1cc295cfaf57aca6e0acaa6fc9fafe11090036f17bd65ab106fe4f5b463c2a4d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"22e3566a988ff3e333bd5f7a988fa8454e1a701beb946eaba122e26a2b8633f6\"" Jul 2 08:53:55.463100 env[1151]: time="2024-07-02T08:53:55.462906574Z" level=info msg="StartContainer for \"22e3566a988ff3e333bd5f7a988fa8454e1a701beb946eaba122e26a2b8633f6\"" Jul 2 08:53:55.495363 systemd[1]: Started cri-containerd-22e3566a988ff3e333bd5f7a988fa8454e1a701beb946eaba122e26a2b8633f6.scope. Jul 2 08:53:55.496601 systemd[1]: cri-containerd-14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773.scope: Deactivated successfully. 
Jul 2 08:53:55.509099 env[1151]: time="2024-07-02T08:53:55.505763110Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice/cri-containerd-14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773.scope/memory.events\": no such file or directory" Jul 2 08:53:55.517670 env[1151]: time="2024-07-02T08:53:55.517087552Z" level=info msg="StartContainer for \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\" returns successfully" Jul 2 08:53:55.645362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773-rootfs.mount: Deactivated successfully. Jul 2 08:53:55.771210 env[1151]: time="2024-07-02T08:53:55.771128317Z" level=info msg="StartContainer for \"22e3566a988ff3e333bd5f7a988fa8454e1a701beb946eaba122e26a2b8633f6\" returns successfully" Jul 2 08:53:55.786951 env[1151]: time="2024-07-02T08:53:55.786818808Z" level=info msg="shim disconnected" id=14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773 Jul 2 08:53:55.787395 env[1151]: time="2024-07-02T08:53:55.787328045Z" level=warning msg="cleaning up after shim disconnected" id=14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773 namespace=k8s.io Jul 2 08:53:55.787565 env[1151]: time="2024-07-02T08:53:55.787531407Z" level=info msg="cleaning up dead shim" Jul 2 08:53:55.804959 env[1151]: time="2024-07-02T08:53:55.804881678Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:53:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1876 runtime=io.containerd.runc.v2\n" Jul 2 08:53:55.918941 kubelet[1418]: E0702 08:53:55.917649 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:56.365666 env[1151]: time="2024-07-02T08:53:56.365491278Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:53:56.413426 kubelet[1418]: I0702 08:53:56.413229 1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gl5ht" podStartSLOduration=4.270584712 podStartE2EDuration="19.413031267s" podCreationTimestamp="2024-07-02 08:53:37 +0000 UTC" firstStartedPulling="2024-07-02 08:53:40.273778561 +0000 UTC m=+4.790338962" lastFinishedPulling="2024-07-02 08:53:55.416225075 +0000 UTC m=+19.932785517" observedRunningTime="2024-07-02 08:53:56.370763028 +0000 UTC m=+20.887323489" watchObservedRunningTime="2024-07-02 08:53:56.413031267 +0000 UTC m=+20.929591778" Jul 2 08:53:56.415278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603188488.mount: Deactivated successfully. Jul 2 08:53:56.426051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1806369245.mount: Deactivated successfully. 
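[editor's note] The pod_startup_latency_tracker line above for kube-proxy-gl5ht separates image-pull time from the rest of startup: podStartE2EDuration is observed-running minus pod creation, and podStartSLOduration is the same interval minus the pull window (the numbers line up: 19.413s − 15.142s ≈ 4.271s). A quick sketch reproducing that arithmetic from the timestamps in the line:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Same textual form the kubelet prints in the latency-tracker line.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-07-02 08:53:37 +0000 UTC")
	running := mustParse("2024-07-02 08:53:56.413031267 +0000 UTC")
	pullStart := mustParse("2024-07-02 08:53:40.273778561 +0000 UTC")
	pullEnd := mustParse("2024-07-02 08:53:55.416225075 +0000 UTC")

	e2e := running.Sub(created)    // ≈ 19.413s, the logged podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // ≈ 15.142s pulling registry.k8s.io/kube-proxy:v1.29.6
	fmt.Println("e2e:", e2e, "pull:", pull, "slo:", e2e-pull) // slo ≈ 4.271s ≈ podStartSLOduration
}
```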
Jul 2 08:53:56.441605 env[1151]: time="2024-07-02T08:53:56.441314605Z" level=info msg="CreateContainer within sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\"" Jul 2 08:53:56.444743 env[1151]: time="2024-07-02T08:53:56.444658952Z" level=info msg="StartContainer for \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\"" Jul 2 08:53:56.469981 systemd[1]: Started cri-containerd-c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4.scope. Jul 2 08:53:56.521918 env[1151]: time="2024-07-02T08:53:56.521807460Z" level=info msg="StartContainer for \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\" returns successfully" Jul 2 08:53:56.660930 kubelet[1418]: I0702 08:53:56.657129 1418 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:53:56.895229 kubelet[1418]: E0702 08:53:56.895141 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:56.919206 kubelet[1418]: E0702 08:53:56.918979 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:57.166887 kernel: Initializing XFRM netlink socket Jul 2 08:53:57.920046 kubelet[1418]: E0702 08:53:57.919912 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:58.922188 kubelet[1418]: E0702 08:53:58.922125 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:53:58.941369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 08:53:58.938728 systemd-networkd[970]: cilium_host: Link UP Jul 2 08:53:58.944168 systemd-networkd[970]: cilium_net: Link UP Jul 2 08:53:58.944179 systemd-networkd[970]: cilium_net: Gained carrier Jul 2 08:53:58.950070 systemd-networkd[970]: cilium_host: Gained carrier Jul 2 08:53:59.071830 systemd-networkd[970]: cilium_vxlan: Link UP Jul 2 08:53:59.071851 systemd-networkd[970]: cilium_vxlan: Gained carrier Jul 2 08:53:59.289129 systemd-networkd[970]: cilium_net: Gained IPv6LL Jul 2 08:53:59.382049 kernel: NET: Registered PF_ALG protocol family Jul 2 08:53:59.761216 systemd-networkd[970]: cilium_host: Gained IPv6LL Jul 2 08:53:59.922894 kubelet[1418]: E0702 08:53:59.922734 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:00.297041 systemd-networkd[970]: lxc_health: Link UP Jul 2 08:54:00.299866 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:54:00.301692 systemd-networkd[970]: lxc_health: Gained carrier Jul 2 08:54:00.923215 kubelet[1418]: E0702 08:54:00.923008 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:01.105183 systemd-networkd[970]: cilium_vxlan: Gained IPv6LL Jul 2 08:54:01.310488 kubelet[1418]: I0702 08:54:01.310261 1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r9whw" podStartSLOduration=12.958327449 podStartE2EDuration="24.310107138s" podCreationTimestamp="2024-07-02 08:53:37 +0000 UTC" firstStartedPulling="2024-07-02 08:53:40.263521599 +0000 UTC m=+4.780081990" lastFinishedPulling="2024-07-02 08:53:51.615301238 
+0000 UTC m=+16.131861679" observedRunningTime="2024-07-02 08:53:57.424097639 +0000 UTC m=+21.940658111" watchObservedRunningTime="2024-07-02 08:54:01.310107138 +0000 UTC m=+25.826667569" Jul 2 08:54:01.923383 kubelet[1418]: E0702 08:54:01.923284 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:02.321357 systemd-networkd[970]: lxc_health: Gained IPv6LL Jul 2 08:54:02.923818 kubelet[1418]: E0702 08:54:02.923749 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:03.924894 kubelet[1418]: E0702 08:54:03.924809 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:03.939400 kubelet[1418]: I0702 08:54:03.939342 1418 topology_manager.go:215] "Topology Admit Handler" podUID="5e5f5ad0-dd5f-472c-b05f-35f614064a32" podNamespace="default" podName="nginx-deployment-6d5f899847-fk6xx" Jul 2 08:54:03.946511 systemd[1]: Created slice kubepods-besteffort-pod5e5f5ad0_dd5f_472c_b05f_35f614064a32.slice. Jul 2 08:54:03.985748 kubelet[1418]: I0702 08:54:03.985679 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrn8s\" (UniqueName: \"kubernetes.io/projected/5e5f5ad0-dd5f-472c-b05f-35f614064a32-kube-api-access-jrn8s\") pod \"nginx-deployment-6d5f899847-fk6xx\" (UID: \"5e5f5ad0-dd5f-472c-b05f-35f614064a32\") " pod="default/nginx-deployment-6d5f899847-fk6xx" Jul 2 08:54:04.253569 env[1151]: time="2024-07-02T08:54:04.252956243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fk6xx,Uid:5e5f5ad0-dd5f-472c-b05f-35f614064a32,Namespace:default,Attempt:0,}" Jul 2 08:54:04.342150 systemd-networkd[970]: lxc2243c7d79deb: Link UP Jul 2 08:54:04.357925 kernel: eth0: renamed from tmpc0dc3 Jul 2 08:54:04.363884 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 08:54:04.363987 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2243c7d79deb: link becomes ready Jul 2 08:54:04.364167 systemd-networkd[970]: lxc2243c7d79deb: Gained carrier Jul 2 08:54:04.925503 kubelet[1418]: E0702 08:54:04.925373 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:05.841500 systemd-networkd[970]: lxc2243c7d79deb: Gained IPv6LL Jul 2 08:54:05.926616 kubelet[1418]: E0702 08:54:05.926560 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:06.278944 env[1151]: time="2024-07-02T08:54:06.266298944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:54:06.278944 env[1151]: time="2024-07-02T08:54:06.266340281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:54:06.278944 env[1151]: time="2024-07-02T08:54:06.266355880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:54:06.278944 env[1151]: time="2024-07-02T08:54:06.266480985Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0dc38e61621fb90d5ce4857dfded1a864aa58b5ac3b963c82d8d615fed44ddc pid=2479 runtime=io.containerd.runc.v2 Jul 2 08:54:06.295484 systemd[1]: run-containerd-runc-k8s.io-c0dc38e61621fb90d5ce4857dfded1a864aa58b5ac3b963c82d8d615fed44ddc-runc.SODSUs.mount: Deactivated successfully. Jul 2 08:54:06.298343 systemd[1]: Started cri-containerd-c0dc38e61621fb90d5ce4857dfded1a864aa58b5ac3b963c82d8d615fed44ddc.scope. Jul 2 08:54:06.344595 env[1151]: time="2024-07-02T08:54:06.344544241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-fk6xx,Uid:5e5f5ad0-dd5f-472c-b05f-35f614064a32,Namespace:default,Attempt:0,} returns sandbox id \"c0dc38e61621fb90d5ce4857dfded1a864aa58b5ac3b963c82d8d615fed44ddc\"" Jul 2 08:54:06.346578 env[1151]: time="2024-07-02T08:54:06.346549657Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 08:54:06.928135 kubelet[1418]: E0702 08:54:06.928055 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:07.929133 kubelet[1418]: E0702 08:54:07.929082 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:08.930253 kubelet[1418]: E0702 08:54:08.930209 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:09.931328 kubelet[1418]: E0702 08:54:09.931267 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:10.541003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564166222.mount: Deactivated successfully. 
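[editor's note] The systemd-networkd messages above track cilium's datapath devices coming up (cilium_host, cilium_net, cilium_vxlan, lxc_health) and, per pod, an lxc* veth such as lxc2243c7d79deb for the nginx pod; more of these appear below as further pods are scheduled. A hedged sketch that enumerates those links from Go, using the third-party github.com/vishvananda/netlink package (an assumption for illustration, not something this node ships):

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		attrs := l.Attrs()
		// Cilium's datapath devices and the per-pod veths show up with these prefixes.
		if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
			fmt.Printf("%-20s %s\n", attrs.Name, attrs.OperState)
		}
	}
}
```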
Jul 2 08:54:10.932598 kubelet[1418]: E0702 08:54:10.932475 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:11.933155 kubelet[1418]: E0702 08:54:11.933088 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:12.934280 kubelet[1418]: E0702 08:54:12.934179 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:13.002239 env[1151]: time="2024-07-02T08:54:13.002080661Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:13.007767 env[1151]: time="2024-07-02T08:54:13.007630207Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:13.018663 env[1151]: time="2024-07-02T08:54:13.018568387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:13.026063 env[1151]: time="2024-07-02T08:54:13.025978917Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:13.029831 env[1151]: time="2024-07-02T08:54:13.029758060Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 08:54:13.036186 env[1151]: time="2024-07-02T08:54:13.036107408Z" level=info msg="CreateContainer within sandbox \"c0dc38e61621fb90d5ce4857dfded1a864aa58b5ac3b963c82d8d615fed44ddc\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 08:54:13.073823 env[1151]: time="2024-07-02T08:54:13.072813834Z" level=info msg="CreateContainer within sandbox \"c0dc38e61621fb90d5ce4857dfded1a864aa58b5ac3b963c82d8d615fed44ddc\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ba82f9d33ac773f40ff2d85e2b8cd4b3406e59ba365df024db91a5abce2646f3\"" Jul 2 08:54:13.075351 env[1151]: time="2024-07-02T08:54:13.075292106Z" level=info msg="StartContainer for \"ba82f9d33ac773f40ff2d85e2b8cd4b3406e59ba365df024db91a5abce2646f3\"" Jul 2 08:54:13.141882 systemd[1]: Started cri-containerd-ba82f9d33ac773f40ff2d85e2b8cd4b3406e59ba365df024db91a5abce2646f3.scope. 
Jul 2 08:54:13.189088 env[1151]: time="2024-07-02T08:54:13.188968390Z" level=info msg="StartContainer for \"ba82f9d33ac773f40ff2d85e2b8cd4b3406e59ba365df024db91a5abce2646f3\" returns successfully" Jul 2 08:54:13.461227 kubelet[1418]: I0702 08:54:13.460954 1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-fk6xx" podStartSLOduration=3.776411908 podStartE2EDuration="10.460810475s" podCreationTimestamp="2024-07-02 08:54:03 +0000 UTC" firstStartedPulling="2024-07-02 08:54:06.346171877 +0000 UTC m=+30.862732278" lastFinishedPulling="2024-07-02 08:54:13.030570394 +0000 UTC m=+37.547130845" observedRunningTime="2024-07-02 08:54:13.460460268 +0000 UTC m=+37.977020709" watchObservedRunningTime="2024-07-02 08:54:13.460810475 +0000 UTC m=+37.977370916" Jul 2 08:54:13.935118 kubelet[1418]: E0702 08:54:13.935044 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:14.936302 kubelet[1418]: E0702 08:54:14.936221 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:15.937533 kubelet[1418]: E0702 08:54:15.937403 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:16.900786 kubelet[1418]: E0702 08:54:16.900555 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:16.938275 kubelet[1418]: E0702 08:54:16.938209 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:17.939885 kubelet[1418]: E0702 08:54:17.939762 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:18.941177 kubelet[1418]: E0702 08:54:18.941013 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:19.941761 kubelet[1418]: E0702 08:54:19.941680 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:20.942144 kubelet[1418]: E0702 08:54:20.942071 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:21.236961 kubelet[1418]: I0702 08:54:21.236811 1418 topology_manager.go:215] "Topology Admit Handler" podUID="32a90836-eb96-47bc-ba62-4ed561ca2351" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 08:54:21.248500 systemd[1]: Created slice kubepods-besteffort-pod32a90836_eb96_47bc_ba62_4ed561ca2351.slice. 
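[editor's note] Each "Created slice kubepods-besteffort-pod….slice" line above is the kubelet's systemd cgroup driver materialising a per-pod cgroup under the BestEffort QoS slice; the unit name is just the pod UID with dashes replaced by underscores. A small sketch reproducing the names seen in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName reproduces the slice names above: pod UID with '-' -> '_',
// nested under the BestEffort QoS slice.
func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortSliceName("5e5f5ad0-dd5f-472c-b05f-35f614064a32"))
	// kubepods-besteffort-pod5e5f5ad0_dd5f_472c_b05f_35f614064a32.slice (nginx-deployment-6d5f899847-fk6xx)
	fmt.Println(besteffortSliceName("32a90836-eb96-47bc-ba62-4ed561ca2351"))
	// kubepods-besteffort-pod32a90836_eb96_47bc_ba62_4ed561ca2351.slice (nfs-server-provisioner-0)
}
```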
Jul 2 08:54:21.321559 kubelet[1418]: I0702 08:54:21.321471 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/32a90836-eb96-47bc-ba62-4ed561ca2351-data\") pod \"nfs-server-provisioner-0\" (UID: \"32a90836-eb96-47bc-ba62-4ed561ca2351\") " pod="default/nfs-server-provisioner-0" Jul 2 08:54:21.321913 kubelet[1418]: I0702 08:54:21.321808 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnv2w\" (UniqueName: \"kubernetes.io/projected/32a90836-eb96-47bc-ba62-4ed561ca2351-kube-api-access-jnv2w\") pod \"nfs-server-provisioner-0\" (UID: \"32a90836-eb96-47bc-ba62-4ed561ca2351\") " pod="default/nfs-server-provisioner-0" Jul 2 08:54:21.558722 env[1151]: time="2024-07-02T08:54:21.558012738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32a90836-eb96-47bc-ba62-4ed561ca2351,Namespace:default,Attempt:0,}" Jul 2 08:54:21.665729 systemd-networkd[970]: lxcd71e2af9afb3: Link UP Jul 2 08:54:21.678912 kernel: eth0: renamed from tmp82b12 Jul 2 08:54:21.693254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 08:54:21.693567 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd71e2af9afb3: link becomes ready Jul 2 08:54:21.693936 systemd-networkd[970]: lxcd71e2af9afb3: Gained carrier Jul 2 08:54:21.942725 kubelet[1418]: E0702 08:54:21.942406 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:22.099817 env[1151]: time="2024-07-02T08:54:22.099655863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:54:22.099817 env[1151]: time="2024-07-02T08:54:22.099759629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:54:22.100406 env[1151]: time="2024-07-02T08:54:22.100328977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:54:22.101501 env[1151]: time="2024-07-02T08:54:22.101278789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82b12f4efd06a30c16e2651b536d66e75a5465b9988273798849bd787f278e71 pid=2603 runtime=io.containerd.runc.v2 Jul 2 08:54:22.136665 systemd[1]: Started cri-containerd-82b12f4efd06a30c16e2651b536d66e75a5465b9988273798849bd787f278e71.scope. Jul 2 08:54:22.201250 env[1151]: time="2024-07-02T08:54:22.200965623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32a90836-eb96-47bc-ba62-4ed561ca2351,Namespace:default,Attempt:0,} returns sandbox id \"82b12f4efd06a30c16e2651b536d66e75a5465b9988273798849bd787f278e71\"" Jul 2 08:54:22.204268 env[1151]: time="2024-07-02T08:54:22.204047425Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 08:54:22.448969 systemd[1]: run-containerd-runc-k8s.io-82b12f4efd06a30c16e2651b536d66e75a5465b9988273798849bd787f278e71-runc.uiVitP.mount: Deactivated successfully. 
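[editor's note] The reconciler_common.go:258 lines above list exactly the volumes the kubelet must verify before it starts nfs-server-provisioner-0: the "data" emptyDir and the projected "kube-api-access-jnv2w" service-account token. A hedged client-go sketch that reads the same list from the API server; the kubeconfig path is an assumption for illustration, since the node only logs the kubelet side:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig path; adjust for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "nfs-server-provisioner-0", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Prints the same names the reconciler logs while verifying attachment:
	// "data" (empty-dir) and "kube-api-access-jnv2w" (projected token).
	for _, v := range pod.Spec.Volumes {
		fmt.Println(v.Name)
	}
}
```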
Jul 2 08:54:22.929398 systemd-networkd[970]: lxcd71e2af9afb3: Gained IPv6LL Jul 2 08:54:22.944584 kubelet[1418]: E0702 08:54:22.944535 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:23.945147 kubelet[1418]: E0702 08:54:23.944913 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:24.945504 kubelet[1418]: E0702 08:54:24.945289 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:25.946183 kubelet[1418]: E0702 08:54:25.946034 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:26.565743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1318472628.mount: Deactivated successfully. Jul 2 08:54:26.947255 kubelet[1418]: E0702 08:54:26.946832 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:27.947993 kubelet[1418]: E0702 08:54:27.947909 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:28.949236 kubelet[1418]: E0702 08:54:28.949171 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:29.942605 env[1151]: time="2024-07-02T08:54:29.942453306Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:29.947058 env[1151]: time="2024-07-02T08:54:29.947002590Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:29.950090 kubelet[1418]: E0702 08:54:29.949994 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:29.953779 env[1151]: time="2024-07-02T08:54:29.953669658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:29.970049 env[1151]: time="2024-07-02T08:54:29.969970950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:29.971459 env[1151]: time="2024-07-02T08:54:29.971375825Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 08:54:29.976172 env[1151]: time="2024-07-02T08:54:29.976087524Z" level=info msg="CreateContainer within sandbox \"82b12f4efd06a30c16e2651b536d66e75a5465b9988273798849bd787f278e71\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 08:54:29.999836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369507543.mount: Deactivated successfully. 
Jul 2 08:54:30.005768 env[1151]: time="2024-07-02T08:54:30.005693926Z" level=info msg="CreateContainer within sandbox \"82b12f4efd06a30c16e2651b536d66e75a5465b9988273798849bd787f278e71\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6cf55dc177d3134e590a14aebb6f989e0e43a27d627279e0d2e80af390a6e6ce\"" Jul 2 08:54:30.007054 env[1151]: time="2024-07-02T08:54:30.006969209Z" level=info msg="StartContainer for \"6cf55dc177d3134e590a14aebb6f989e0e43a27d627279e0d2e80af390a6e6ce\"" Jul 2 08:54:30.045687 systemd[1]: Started cri-containerd-6cf55dc177d3134e590a14aebb6f989e0e43a27d627279e0d2e80af390a6e6ce.scope. Jul 2 08:54:30.096894 env[1151]: time="2024-07-02T08:54:30.094992028Z" level=info msg="StartContainer for \"6cf55dc177d3134e590a14aebb6f989e0e43a27d627279e0d2e80af390a6e6ce\" returns successfully" Jul 2 08:54:30.588025 kubelet[1418]: I0702 08:54:30.587911 1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.819290433 podStartE2EDuration="9.587716843s" podCreationTimestamp="2024-07-02 08:54:21 +0000 UTC" firstStartedPulling="2024-07-02 08:54:22.203590598 +0000 UTC m=+46.720150999" lastFinishedPulling="2024-07-02 08:54:29.972017018 +0000 UTC m=+54.488577409" observedRunningTime="2024-07-02 08:54:30.586139875 +0000 UTC m=+55.102700316" watchObservedRunningTime="2024-07-02 08:54:30.587716843 +0000 UTC m=+55.104277294" Jul 2 08:54:30.951352 kubelet[1418]: E0702 08:54:30.951147 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:31.952943 kubelet[1418]: E0702 08:54:31.952720 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:32.953936 kubelet[1418]: E0702 08:54:32.953803 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:33.954464 kubelet[1418]: E0702 08:54:33.954358 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:34.955694 kubelet[1418]: E0702 08:54:34.955559 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:35.956261 kubelet[1418]: E0702 08:54:35.956105 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:36.895894 kubelet[1418]: E0702 08:54:36.895779 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:36.957110 kubelet[1418]: E0702 08:54:36.956961 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:37.958174 kubelet[1418]: E0702 08:54:37.958013 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:38.958665 kubelet[1418]: E0702 08:54:38.958580 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:39.960188 kubelet[1418]: E0702 08:54:39.960115 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:40.030231 kubelet[1418]: I0702 08:54:40.030148 1418 topology_manager.go:215] "Topology Admit Handler" 
podUID="65c8b11c-64a0-4175-959d-36d9370775b7" podNamespace="default" podName="test-pod-1" Jul 2 08:54:40.043186 systemd[1]: Created slice kubepods-besteffort-pod65c8b11c_64a0_4175_959d_36d9370775b7.slice. Jul 2 08:54:40.170994 kubelet[1418]: I0702 08:54:40.170897 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-847beb08-5196-4595-a222-9080f110cd54\" (UniqueName: \"kubernetes.io/nfs/65c8b11c-64a0-4175-959d-36d9370775b7-pvc-847beb08-5196-4595-a222-9080f110cd54\") pod \"test-pod-1\" (UID: \"65c8b11c-64a0-4175-959d-36d9370775b7\") " pod="default/test-pod-1" Jul 2 08:54:40.171569 kubelet[1418]: I0702 08:54:40.171515 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4tw\" (UniqueName: \"kubernetes.io/projected/65c8b11c-64a0-4175-959d-36d9370775b7-kube-api-access-jx4tw\") pod \"test-pod-1\" (UID: \"65c8b11c-64a0-4175-959d-36d9370775b7\") " pod="default/test-pod-1" Jul 2 08:54:40.371939 kernel: FS-Cache: Loaded Jul 2 08:54:40.444029 kernel: RPC: Registered named UNIX socket transport module. Jul 2 08:54:40.444232 kernel: RPC: Registered udp transport module. Jul 2 08:54:40.444291 kernel: RPC: Registered tcp transport module. Jul 2 08:54:40.445890 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 08:54:40.525913 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 2 08:54:40.752016 kernel: NFS: Registering the id_resolver key type Jul 2 08:54:40.752258 kernel: Key type id_resolver registered Jul 2 08:54:40.752322 kernel: Key type id_legacy registered Jul 2 08:54:40.820220 nfsidmap[2726]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jul 2 08:54:40.830213 nfsidmap[2727]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'novalocal' Jul 2 08:54:40.951481 env[1151]: time="2024-07-02T08:54:40.950524723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:65c8b11c-64a0-4175-959d-36d9370775b7,Namespace:default,Attempt:0,}" Jul 2 08:54:40.961418 kubelet[1418]: E0702 08:54:40.961324 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:41.017828 systemd-networkd[970]: lxc801e228ac0b0: Link UP Jul 2 08:54:41.031101 kernel: eth0: renamed from tmpf0e77 Jul 2 08:54:41.043929 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 08:54:41.044142 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc801e228ac0b0: link becomes ready Jul 2 08:54:41.044776 systemd-networkd[970]: lxc801e228ac0b0: Gained carrier Jul 2 08:54:41.350053 env[1151]: time="2024-07-02T08:54:41.349512727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:54:41.350053 env[1151]: time="2024-07-02T08:54:41.349609088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:54:41.350053 env[1151]: time="2024-07-02T08:54:41.349643523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:54:41.351276 env[1151]: time="2024-07-02T08:54:41.350365054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0e779b3bae5ec71195102eaabec4dd24f941d8c30a0e158f4b0eaaf2f9f0add pid=2756 runtime=io.containerd.runc.v2 Jul 2 08:54:41.382733 systemd[1]: Started cri-containerd-f0e779b3bae5ec71195102eaabec4dd24f941d8c30a0e158f4b0eaaf2f9f0add.scope. Jul 2 08:54:41.435108 env[1151]: time="2024-07-02T08:54:41.435043543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:65c8b11c-64a0-4175-959d-36d9370775b7,Namespace:default,Attempt:0,} returns sandbox id \"f0e779b3bae5ec71195102eaabec4dd24f941d8c30a0e158f4b0eaaf2f9f0add\"" Jul 2 08:54:41.437495 env[1151]: time="2024-07-02T08:54:41.437455472Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 08:54:41.953182 env[1151]: time="2024-07-02T08:54:41.953039014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:41.958017 env[1151]: time="2024-07-02T08:54:41.957938228Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:41.962337 kubelet[1418]: E0702 08:54:41.962199 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:41.963388 env[1151]: time="2024-07-02T08:54:41.963323547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:41.967538 env[1151]: time="2024-07-02T08:54:41.967467595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:41.969889 env[1151]: time="2024-07-02T08:54:41.969764809Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 08:54:41.975491 env[1151]: time="2024-07-02T08:54:41.975391484Z" level=info msg="CreateContainer within sandbox \"f0e779b3bae5ec71195102eaabec4dd24f941d8c30a0e158f4b0eaaf2f9f0add\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 08:54:42.014228 env[1151]: time="2024-07-02T08:54:42.014100626Z" level=info msg="CreateContainer within sandbox \"f0e779b3bae5ec71195102eaabec4dd24f941d8c30a0e158f4b0eaaf2f9f0add\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9c2a0f373b44ec690a75b07b0edbe348a9bfc23ae373286b07301dbf99be14cc\"" Jul 2 08:54:42.015823 env[1151]: time="2024-07-02T08:54:42.015739418Z" level=info msg="StartContainer for \"9c2a0f373b44ec690a75b07b0edbe348a9bfc23ae373286b07301dbf99be14cc\"" Jul 2 08:54:42.055194 systemd[1]: Started cri-containerd-9c2a0f373b44ec690a75b07b0edbe348a9bfc23ae373286b07301dbf99be14cc.scope. 
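[editor's note] For test-pod-1 the second PullImage of ghcr.io/flatcar/nginx:latest above completes in roughly half a second and emits only ImageUpdate events, which indicates the layers pulled earlier for the nginx deployment are being reused from containerd's image store. A hedged sketch of checking that cache directly via the CRI image service:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/nginx:latest"},
	})
	if err != nil {
		log.Fatal(err)
	}
	if resp.Image == nil {
		fmt.Println("not cached; a pull would fetch layers")
		return
	}
	fmt.Println("cached as", resp.Image.Id, "-- a repeat PullImage only refreshes metadata")
}
```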
Jul 2 08:54:42.107392 env[1151]: time="2024-07-02T08:54:42.107341209Z" level=info msg="StartContainer for \"9c2a0f373b44ec690a75b07b0edbe348a9bfc23ae373286b07301dbf99be14cc\" returns successfully" Jul 2 08:54:42.644237 kubelet[1418]: I0702 08:54:42.644146 1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.110450945 podStartE2EDuration="19.644055926s" podCreationTimestamp="2024-07-02 08:54:23 +0000 UTC" firstStartedPulling="2024-07-02 08:54:41.436740734 +0000 UTC m=+65.953301125" lastFinishedPulling="2024-07-02 08:54:41.970345675 +0000 UTC m=+66.486906106" observedRunningTime="2024-07-02 08:54:42.643754017 +0000 UTC m=+67.160314458" watchObservedRunningTime="2024-07-02 08:54:42.644055926 +0000 UTC m=+67.160616368" Jul 2 08:54:42.705993 systemd-networkd[970]: lxc801e228ac0b0: Gained IPv6LL Jul 2 08:54:42.962660 kubelet[1418]: E0702 08:54:42.962481 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:43.963723 kubelet[1418]: E0702 08:54:43.963640 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:44.965219 kubelet[1418]: E0702 08:54:44.965075 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:45.965687 kubelet[1418]: E0702 08:54:45.965617 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:46.966890 kubelet[1418]: E0702 08:54:46.966766 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:47.967559 kubelet[1418]: E0702 08:54:47.967446 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:48.968305 kubelet[1418]: E0702 08:54:48.968232 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:49.969619 kubelet[1418]: E0702 08:54:49.969567 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:50.970623 kubelet[1418]: E0702 08:54:50.970481 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:51.308769 env[1151]: time="2024-07-02T08:54:51.308642150Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:54:51.321145 env[1151]: time="2024-07-02T08:54:51.321100061Z" level=info msg="StopContainer for \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\" with timeout 2 (s)" Jul 2 08:54:51.321778 env[1151]: time="2024-07-02T08:54:51.321719007Z" level=info msg="Stop container \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\" with signal terminated" Jul 2 08:54:51.334031 systemd-networkd[970]: lxc_health: Link DOWN Jul 2 08:54:51.334042 systemd-networkd[970]: lxc_health: Lost carrier Jul 2 08:54:51.396825 systemd[1]: cri-containerd-c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4.scope: Deactivated successfully. 
Jul 2 08:54:51.397337 systemd[1]: cri-containerd-c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4.scope: Consumed 9.592s CPU time. Jul 2 08:54:51.435019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4-rootfs.mount: Deactivated successfully. Jul 2 08:54:51.469207 env[1151]: time="2024-07-02T08:54:51.469082700Z" level=info msg="shim disconnected" id=c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4 Jul 2 08:54:51.469207 env[1151]: time="2024-07-02T08:54:51.469188870Z" level=warning msg="cleaning up after shim disconnected" id=c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4 namespace=k8s.io Jul 2 08:54:51.469700 env[1151]: time="2024-07-02T08:54:51.469205702Z" level=info msg="cleaning up dead shim" Jul 2 08:54:51.478869 env[1151]: time="2024-07-02T08:54:51.478770241Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2887 runtime=io.containerd.runc.v2\n" Jul 2 08:54:51.484274 env[1151]: time="2024-07-02T08:54:51.484203567Z" level=info msg="StopContainer for \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\" returns successfully" Jul 2 08:54:51.485581 env[1151]: time="2024-07-02T08:54:51.485527401Z" level=info msg="StopPodSandbox for \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\"" Jul 2 08:54:51.486032 env[1151]: time="2024-07-02T08:54:51.485984432Z" level=info msg="Container to stop \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:54:51.486217 env[1151]: time="2024-07-02T08:54:51.486173869Z" level=info msg="Container to stop \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:54:51.486479 env[1151]: time="2024-07-02T08:54:51.486437346Z" level=info msg="Container to stop \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:54:51.486664 env[1151]: time="2024-07-02T08:54:51.486618917Z" level=info msg="Container to stop \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:54:51.486829 env[1151]: time="2024-07-02T08:54:51.486787135Z" level=info msg="Container to stop \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:54:51.491184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561-shm.mount: Deactivated successfully. Jul 2 08:54:51.503518 systemd[1]: cri-containerd-39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561.scope: Deactivated successfully. Jul 2 08:54:51.541960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561-rootfs.mount: Deactivated successfully. 
Jul 2 08:54:51.549751 env[1151]: time="2024-07-02T08:54:51.549691331Z" level=info msg="shim disconnected" id=39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561 Jul 2 08:54:51.550104 env[1151]: time="2024-07-02T08:54:51.550083621Z" level=warning msg="cleaning up after shim disconnected" id=39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561 namespace=k8s.io Jul 2 08:54:51.550181 env[1151]: time="2024-07-02T08:54:51.550165705Z" level=info msg="cleaning up dead shim" Jul 2 08:54:51.561737 env[1151]: time="2024-07-02T08:54:51.559884605Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2918 runtime=io.containerd.runc.v2\n" Jul 2 08:54:51.561737 env[1151]: time="2024-07-02T08:54:51.560775363Z" level=info msg="TearDown network for sandbox \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" successfully" Jul 2 08:54:51.561737 env[1151]: time="2024-07-02T08:54:51.560801402Z" level=info msg="StopPodSandbox for \"39f1e0bb3f3faa92c880ba860aed663a1dcd3c995e5923ab553a9f524eb11561\" returns successfully" Jul 2 08:54:51.660666 kubelet[1418]: I0702 08:54:51.660604 1418 scope.go:117] "RemoveContainer" containerID="c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4" Jul 2 08:54:51.682473 kubelet[1418]: I0702 08:54:51.682431 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-net\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.682925 kubelet[1418]: I0702 08:54:51.682895 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-hubble-tls\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.683155 kubelet[1418]: I0702 08:54:51.683130 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-cgroup\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.683735 env[1151]: time="2024-07-02T08:54:51.683572043Z" level=info msg="RemoveContainer for \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\"" Jul 2 08:54:51.685398 kubelet[1418]: I0702 08:54:51.685364 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cni-path\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.685671 kubelet[1418]: I0702 08:54:51.685646 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-lib-modules\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.685919 kubelet[1418]: I0702 08:54:51.685889 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-xtables-lock\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " 
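
Kubelet begins removing the five containers of the torn-down pod 889aa950-... here; a few entries further down the same container IDs reappear in ContainerStatus calls that fail with gRPC NotFound and get logged as "DeleteContainer returned error". That appears to be the expected result of querying the runtime for a container that has just been deleted. The sketch below shows one conventional way a caller distinguishes that benign case, using standard gRPC status inspection; it is not kubelet's actual handling.

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // isContainerGone reports whether an error from the container runtime means
    // "this container no longer exists", which is benign right after removal.
    func isContainerGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        // Simulate the kind of error seen in the entries below (ID abbreviated).
        err := status.Error(codes.NotFound,
            "an error occurred when try to find container \"c6f2b06f...\": not found")
        fmt.Println(isContainerGone(err))                 // true
        fmt.Println(isContainerGone(errors.New("other"))) // false
    }
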
Jul 2 08:54:51.686497 kubelet[1418]: I0702 08:54:51.686445 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq2kk\" (UniqueName: \"kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-kube-api-access-wq2kk\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.686637 kubelet[1418]: I0702 08:54:51.686527 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-etc-cni-netd\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.686637 kubelet[1418]: I0702 08:54:51.686579 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-run\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.687062 kubelet[1418]: I0702 08:54:51.686638 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/889aa950-7444-4949-89e6-6576480ffcd9-cilium-config-path\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.687062 kubelet[1418]: I0702 08:54:51.686716 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-kernel\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.687062 kubelet[1418]: I0702 08:54:51.686769 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-hostproc\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.687062 kubelet[1418]: I0702 08:54:51.686864 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/889aa950-7444-4949-89e6-6576480ffcd9-clustermesh-secrets\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.687062 kubelet[1418]: I0702 08:54:51.686959 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-bpf-maps\") pod \"889aa950-7444-4949-89e6-6576480ffcd9\" (UID: \"889aa950-7444-4949-89e6-6576480ffcd9\") " Jul 2 08:54:51.687062 kubelet[1418]: I0702 08:54:51.687050 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.687440 kubelet[1418]: I0702 08:54:51.686253 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.687440 kubelet[1418]: I0702 08:54:51.683418 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.697983 env[1151]: time="2024-07-02T08:54:51.693383047Z" level=info msg="RemoveContainer for \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\" returns successfully" Jul 2 08:54:51.696425 systemd[1]: var-lib-kubelet-pods-889aa950\x2d7444\x2d4949\x2d89e6\x2d6576480ffcd9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwq2kk.mount: Deactivated successfully. Jul 2 08:54:51.698572 kubelet[1418]: I0702 08:54:51.683377 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.698824 kubelet[1418]: I0702 08:54:51.698779 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cni-path" (OuterVolumeSpecName: "cni-path") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.699094 kubelet[1418]: I0702 08:54:51.699056 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.699569 kubelet[1418]: I0702 08:54:51.699526 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:54:51.699763 kubelet[1418]: I0702 08:54:51.699687 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-kube-api-access-wq2kk" (OuterVolumeSpecName: "kube-api-access-wq2kk") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "kube-api-access-wq2kk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:54:51.699935 kubelet[1418]: I0702 08:54:51.699792 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.700026 kubelet[1418]: I0702 08:54:51.699944 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.701055 kubelet[1418]: I0702 08:54:51.701020 1418 scope.go:117] "RemoveContainer" containerID="14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773" Jul 2 08:54:51.702027 kubelet[1418]: I0702 08:54:51.701989 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-hostproc" (OuterVolumeSpecName: "hostproc") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.702251 kubelet[1418]: I0702 08:54:51.702212 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:54:51.706759 kubelet[1418]: I0702 08:54:51.706679 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/889aa950-7444-4949-89e6-6576480ffcd9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:54:51.708079 env[1151]: time="2024-07-02T08:54:51.707489723Z" level=info msg="RemoveContainer for \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\"" Jul 2 08:54:51.709759 kubelet[1418]: I0702 08:54:51.709691 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/889aa950-7444-4949-89e6-6576480ffcd9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "889aa950-7444-4949-89e6-6576480ffcd9" (UID: "889aa950-7444-4949-89e6-6576480ffcd9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:54:51.713073 env[1151]: time="2024-07-02T08:54:51.713006456Z" level=info msg="RemoveContainer for \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\" returns successfully" Jul 2 08:54:51.713958 kubelet[1418]: I0702 08:54:51.713906 1418 scope.go:117] "RemoveContainer" containerID="05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406" Jul 2 08:54:51.716634 env[1151]: time="2024-07-02T08:54:51.716580940Z" level=info msg="RemoveContainer for \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\"" Jul 2 08:54:51.721713 env[1151]: time="2024-07-02T08:54:51.721654659Z" level=info msg="RemoveContainer for \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\" returns successfully" Jul 2 08:54:51.722301 kubelet[1418]: I0702 08:54:51.722266 1418 scope.go:117] "RemoveContainer" containerID="87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00" Jul 2 08:54:51.725221 env[1151]: time="2024-07-02T08:54:51.725134746Z" level=info msg="RemoveContainer for \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\"" Jul 2 08:54:51.730487 env[1151]: time="2024-07-02T08:54:51.730422026Z" level=info msg="RemoveContainer for \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\" returns successfully" Jul 2 08:54:51.731324 kubelet[1418]: I0702 08:54:51.731117 1418 scope.go:117] "RemoveContainer" containerID="5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3" Jul 2 08:54:51.733065 env[1151]: time="2024-07-02T08:54:51.732997829Z" level=info msg="RemoveContainer for \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\"" Jul 2 08:54:51.738099 env[1151]: time="2024-07-02T08:54:51.738037543Z" level=info msg="RemoveContainer for \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\" returns successfully" Jul 2 08:54:51.738755 kubelet[1418]: I0702 08:54:51.738573 1418 scope.go:117] "RemoveContainer" containerID="c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4" Jul 2 08:54:51.739093 env[1151]: time="2024-07-02T08:54:51.738901992Z" level=error msg="ContainerStatus for \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\": not found" Jul 2 08:54:51.739759 kubelet[1418]: E0702 08:54:51.739397 1418 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\": not found" containerID="c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4" Jul 2 08:54:51.739759 kubelet[1418]: I0702 08:54:51.739581 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4"} err="failed to get container status \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6f2b06fe12a3b368ad4c688bd32ac21f019e15e37cc5e385c910fb398f8c8f4\": not found" Jul 2 08:54:51.739759 kubelet[1418]: I0702 08:54:51.739610 1418 scope.go:117] "RemoveContainer" containerID="14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773" Jul 2 08:54:51.740059 env[1151]: time="2024-07-02T08:54:51.739808409Z" 
level=error msg="ContainerStatus for \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\": not found" Jul 2 08:54:51.740527 kubelet[1418]: E0702 08:54:51.740322 1418 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\": not found" containerID="14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773" Jul 2 08:54:51.740527 kubelet[1418]: I0702 08:54:51.740380 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773"} err="failed to get container status \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\": rpc error: code = NotFound desc = an error occurred when try to find container \"14e3239ac819351a4956cd7204e6bc4a86a1a8eabe39d8c6a3b88fade3099773\": not found" Jul 2 08:54:51.740527 kubelet[1418]: I0702 08:54:51.740406 1418 scope.go:117] "RemoveContainer" containerID="05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406" Jul 2 08:54:51.740774 env[1151]: time="2024-07-02T08:54:51.740599700Z" level=error msg="ContainerStatus for \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\": not found" Jul 2 08:54:51.741313 kubelet[1418]: E0702 08:54:51.741076 1418 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\": not found" containerID="05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406" Jul 2 08:54:51.741313 kubelet[1418]: I0702 08:54:51.741131 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406"} err="failed to get container status \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\": rpc error: code = NotFound desc = an error occurred when try to find container \"05ebe6aa7c339ed368455996392b054ec87297f9675cda604c28c37dda15e406\": not found" Jul 2 08:54:51.741313 kubelet[1418]: I0702 08:54:51.741151 1418 scope.go:117] "RemoveContainer" containerID="87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00" Jul 2 08:54:51.741586 env[1151]: time="2024-07-02T08:54:51.741376844Z" level=error msg="ContainerStatus for \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\": not found" Jul 2 08:54:51.742082 kubelet[1418]: E0702 08:54:51.741791 1418 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\": not found" containerID="87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00" Jul 2 08:54:51.742082 kubelet[1418]: I0702 08:54:51.741929 1418 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00"} err="failed to get container status \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\": rpc error: code = NotFound desc = an error occurred when try to find container \"87c3ecedbadeb22fa72bfb6e3d3b4943d0e90f1758015996984f58ed5e08ae00\": not found" Jul 2 08:54:51.742082 kubelet[1418]: I0702 08:54:51.741956 1418 scope.go:117] "RemoveContainer" containerID="5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3" Jul 2 08:54:51.742351 env[1151]: time="2024-07-02T08:54:51.742209593Z" level=error msg="ContainerStatus for \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\": not found" Jul 2 08:54:51.742688 kubelet[1418]: E0702 08:54:51.742562 1418 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\": not found" containerID="5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3" Jul 2 08:54:51.742688 kubelet[1418]: I0702 08:54:51.742618 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3"} err="failed to get container status \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f33010776dd6ef368c3dbbb7027f3fcb91184520d53350a7c1b2b898d3de7c3\": not found" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788115 1418 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-hostproc\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788185 1418 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/889aa950-7444-4949-89e6-6576480ffcd9-clustermesh-secrets\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788212 1418 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-bpf-maps\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788239 1418 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-net\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788265 1418 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-hubble-tls\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788290 1418 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-cgroup\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788315 1418 reconciler_common.go:300] "Volume detached for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cni-path\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.788562 kubelet[1418]: I0702 08:54:51.788339 1418 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-lib-modules\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.789318 kubelet[1418]: I0702 08:54:51.788365 1418 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-xtables-lock\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.789318 kubelet[1418]: I0702 08:54:51.788394 1418 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wq2kk\" (UniqueName: \"kubernetes.io/projected/889aa950-7444-4949-89e6-6576480ffcd9-kube-api-access-wq2kk\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.789318 kubelet[1418]: I0702 08:54:51.788419 1418 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-etc-cni-netd\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.789318 kubelet[1418]: I0702 08:54:51.788445 1418 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-cilium-run\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.789318 kubelet[1418]: I0702 08:54:51.788472 1418 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/889aa950-7444-4949-89e6-6576480ffcd9-cilium-config-path\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.789318 kubelet[1418]: I0702 08:54:51.788498 1418 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/889aa950-7444-4949-89e6-6576480ffcd9-host-proc-sys-kernel\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:54:51.964871 systemd[1]: Removed slice kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice. Jul 2 08:54:51.965105 systemd[1]: kubepods-burstable-pod889aa950_7444_4949_89e6_6576480ffcd9.slice: Consumed 9.710s CPU time. Jul 2 08:54:51.971257 kubelet[1418]: E0702 08:54:51.971209 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:52.088168 kubelet[1418]: E0702 08:54:52.088122 1418 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:54:52.260428 systemd[1]: var-lib-kubelet-pods-889aa950\x2d7444\x2d4949\x2d89e6\x2d6576480ffcd9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:54:52.260574 systemd[1]: var-lib-kubelet-pods-889aa950\x2d7444\x2d4949\x2d89e6\x2d6576480ffcd9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 08:54:52.973264 kubelet[1418]: E0702 08:54:52.973165 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:53.155774 kubelet[1418]: I0702 08:54:53.155723 1418 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="889aa950-7444-4949-89e6-6576480ffcd9" path="/var/lib/kubelet/pods/889aa950-7444-4949-89e6-6576480ffcd9/volumes" Jul 2 08:54:53.975254 kubelet[1418]: E0702 08:54:53.975189 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:54.976683 kubelet[1418]: E0702 08:54:54.976416 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:55.977113 kubelet[1418]: E0702 08:54:55.977034 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:56.609426 kubelet[1418]: I0702 08:54:56.609296 1418 topology_manager.go:215] "Topology Admit Handler" podUID="b6ba933c-336d-48bf-91ad-f9882953442c" podNamespace="kube-system" podName="cilium-operator-5cc964979-k49b8" Jul 2 08:54:56.609786 kubelet[1418]: E0702 08:54:56.609474 1418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="889aa950-7444-4949-89e6-6576480ffcd9" containerName="mount-cgroup" Jul 2 08:54:56.609786 kubelet[1418]: E0702 08:54:56.609542 1418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="889aa950-7444-4949-89e6-6576480ffcd9" containerName="apply-sysctl-overwrites" Jul 2 08:54:56.609786 kubelet[1418]: E0702 08:54:56.609565 1418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="889aa950-7444-4949-89e6-6576480ffcd9" containerName="mount-bpf-fs" Jul 2 08:54:56.609786 kubelet[1418]: E0702 08:54:56.609585 1418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="889aa950-7444-4949-89e6-6576480ffcd9" containerName="clean-cilium-state" Jul 2 08:54:56.609786 kubelet[1418]: E0702 08:54:56.609642 1418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="889aa950-7444-4949-89e6-6576480ffcd9" containerName="cilium-agent" Jul 2 08:54:56.612333 kubelet[1418]: I0702 08:54:56.610204 1418 memory_manager.go:354] "RemoveStaleState removing state" podUID="889aa950-7444-4949-89e6-6576480ffcd9" containerName="cilium-agent" Jul 2 08:54:56.626502 systemd[1]: Created slice kubepods-besteffort-podb6ba933c_336d_48bf_91ad_f9882953442c.slice. Jul 2 08:54:56.631498 kubelet[1418]: I0702 08:54:56.631410 1418 topology_manager.go:215] "Topology Admit Handler" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" podNamespace="kube-system" podName="cilium-xxmv9" Jul 2 08:54:56.647579 systemd[1]: Created slice kubepods-burstable-pod0e2fb3ed_eaae_4ad0_ad22_9428fe918e10.slice. 
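
Kubelet has now admitted the replacement pods and created kubepods-besteffort-podb6ba933c_... for cilium-operator and kubepods-burstable-pod0e2fb3ed_... for cilium-xxmv9, mirroring the burstable slice it removed for the old cilium pod above. Judging from the logged names, with the systemd cgroup driver the leaf slice is built from the pod's QoS class plus its UID with dashes turned into underscores. A small sketch that reproduces the two names:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the leaf slice name kubelet uses for a pod, as seen in the
    // log: kubepods-<qos>-pod<uid with dashes replaced by underscores>.slice.
    func podSlice(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(podSlice("besteffort", "b6ba933c-336d-48bf-91ad-f9882953442c"))
        fmt.Println(podSlice("burstable", "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"))
        // kubepods-besteffort-podb6ba933c_336d_48bf_91ad_f9882953442c.slice
        // kubepods-burstable-pod0e2fb3ed_eaae_4ad0_ad22_9428fe918e10.slice
    }
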
Jul 2 08:54:56.725888 kubelet[1418]: I0702 08:54:56.725774 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-clustermesh-secrets\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.726386 kubelet[1418]: I0702 08:54:56.726329 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hubble-tls\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.726715 kubelet[1418]: I0702 08:54:56.726689 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-run\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.727117 kubelet[1418]: I0702 08:54:56.727059 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6ba933c-336d-48bf-91ad-f9882953442c-cilium-config-path\") pod \"cilium-operator-5cc964979-k49b8\" (UID: \"b6ba933c-336d-48bf-91ad-f9882953442c\") " pod="kube-system/cilium-operator-5cc964979-k49b8" Jul 2 08:54:56.727465 kubelet[1418]: I0702 08:54:56.727418 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kprkx\" (UniqueName: \"kubernetes.io/projected/b6ba933c-336d-48bf-91ad-f9882953442c-kube-api-access-kprkx\") pod \"cilium-operator-5cc964979-k49b8\" (UID: \"b6ba933c-336d-48bf-91ad-f9882953442c\") " pod="kube-system/cilium-operator-5cc964979-k49b8" Jul 2 08:54:56.727867 kubelet[1418]: I0702 08:54:56.727764 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-bpf-maps\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.728186 kubelet[1418]: I0702 08:54:56.728159 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-cgroup\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.728543 kubelet[1418]: I0702 08:54:56.728490 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-xtables-lock\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.728930 kubelet[1418]: I0702 08:54:56.728900 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cni-path\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.729272 kubelet[1418]: I0702 08:54:56.729246 1418 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-ipsec-secrets\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.729585 kubelet[1418]: I0702 08:54:56.729560 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-net\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.729929 kubelet[1418]: I0702 08:54:56.729902 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wctfn\" (UniqueName: \"kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-kube-api-access-wctfn\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.730243 kubelet[1418]: I0702 08:54:56.730218 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hostproc\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.730574 kubelet[1418]: I0702 08:54:56.730548 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-lib-modules\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.730931 kubelet[1418]: I0702 08:54:56.730879 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-config-path\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.731215 kubelet[1418]: I0702 08:54:56.731190 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-etc-cni-netd\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.731535 kubelet[1418]: I0702 08:54:56.731509 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-kernel\") pod \"cilium-xxmv9\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " pod="kube-system/cilium-xxmv9" Jul 2 08:54:56.902408 kubelet[1418]: E0702 08:54:56.898444 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:56.938116 env[1151]: time="2024-07-02T08:54:56.938045127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-k49b8,Uid:b6ba933c-336d-48bf-91ad-f9882953442c,Namespace:kube-system,Attempt:0,}" Jul 2 08:54:56.957983 env[1151]: time="2024-07-02T08:54:56.957931240Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-xxmv9,Uid:0e2fb3ed-eaae-4ad0-ad22-9428fe918e10,Namespace:kube-system,Attempt:0,}" Jul 2 08:54:56.966623 env[1151]: time="2024-07-02T08:54:56.966442226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:54:56.966920 env[1151]: time="2024-07-02T08:54:56.966577812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:54:56.966920 env[1151]: time="2024-07-02T08:54:56.966611585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:54:56.967309 env[1151]: time="2024-07-02T08:54:56.967190975Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e831e71e4b8fffa430751cf28121ae92ec42f3261253e9739df6626c9d0696c pid=2948 runtime=io.containerd.runc.v2 Jul 2 08:54:56.978661 kubelet[1418]: E0702 08:54:56.978556 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:56.984173 env[1151]: time="2024-07-02T08:54:56.983758130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:54:56.984173 env[1151]: time="2024-07-02T08:54:56.983826900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:54:56.984173 env[1151]: time="2024-07-02T08:54:56.983919795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:54:56.984992 env[1151]: time="2024-07-02T08:54:56.984734619Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2 pid=2971 runtime=io.containerd.runc.v2 Jul 2 08:54:56.988336 systemd[1]: Started cri-containerd-8e831e71e4b8fffa430751cf28121ae92ec42f3261253e9739df6626c9d0696c.scope. Jul 2 08:54:57.013401 systemd[1]: Started cri-containerd-5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2.scope. 
Jul 2 08:54:57.059339 env[1151]: time="2024-07-02T08:54:57.059276163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xxmv9,Uid:0e2fb3ed-eaae-4ad0-ad22-9428fe918e10,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\"" Jul 2 08:54:57.062809 env[1151]: time="2024-07-02T08:54:57.062765453Z" level=info msg="CreateContainer within sandbox \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:54:57.084114 env[1151]: time="2024-07-02T08:54:57.084040057Z" level=info msg="CreateContainer within sandbox \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\"" Jul 2 08:54:57.085224 env[1151]: time="2024-07-02T08:54:57.084888705Z" level=info msg="StartContainer for \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\"" Jul 2 08:54:57.087257 env[1151]: time="2024-07-02T08:54:57.087209845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-k49b8,Uid:b6ba933c-336d-48bf-91ad-f9882953442c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e831e71e4b8fffa430751cf28121ae92ec42f3261253e9739df6626c9d0696c\"" Jul 2 08:54:57.090218 kubelet[1418]: E0702 08:54:57.090179 1418 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:54:57.090574 env[1151]: time="2024-07-02T08:54:57.090529866Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 08:54:57.110354 systemd[1]: Started cri-containerd-2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d.scope. Jul 2 08:54:57.125703 systemd[1]: cri-containerd-2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d.scope: Deactivated successfully. 
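
The containerd entries above trace the CRI call order for the new pods: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox and returns a container ID, and StartContainer is then attempted (and, as the next entries show, fails for mount-cgroup). The sketch below is only a trimmed-down illustration of that sequence; the runtimeService interface and the fake runtime are stand-ins invented for the example, not the real cri-api types.

    package main

    import "fmt"

    // runtimeService is a stand-in for the CRI runtime client, reduced to the three
    // calls visible in the log: RunPodSandbox -> CreateContainer -> StartContainer.
    type runtimeService interface {
        RunPodSandbox(podName string) (sandboxID string, err error)
        CreateContainer(sandboxID, containerName string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startInitContainer walks the same sequence the log shows for cilium-xxmv9's
    // mount-cgroup init container; any error surfaces as "Error syncing pod".
    func startInitContainer(rs runtimeService, pod, name string) error {
        sandboxID, err := rs.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox: %w", err)
        }
        containerID, err := rs.CreateContainer(sandboxID, name)
        if err != nil {
            return fmt.Errorf("CreateContainer: %w", err)
        }
        if err := rs.StartContainer(containerID); err != nil {
            return fmt.Errorf("StartContainer: %w", err)
        }
        return nil
    }

    // fakeRuntime replays the outcome recorded in the log (IDs abbreviated).
    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(pod string) (string, error)        { return "5c58e56a...", nil }
    func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return "2966bcf9...", nil }
    func (fakeRuntime) StartContainer(id string) error {
        return fmt.Errorf("write /proc/self/attr/keycreate: invalid argument")
    }

    func main() {
        fmt.Println(startInitContainer(fakeRuntime{}, "cilium-xxmv9", "mount-cgroup"))
    }
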
Jul 2 08:54:57.153926 env[1151]: time="2024-07-02T08:54:57.152147234Z" level=info msg="shim disconnected" id=2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d Jul 2 08:54:57.153926 env[1151]: time="2024-07-02T08:54:57.152299341Z" level=warning msg="cleaning up after shim disconnected" id=2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d namespace=k8s.io Jul 2 08:54:57.153926 env[1151]: time="2024-07-02T08:54:57.152314219Z" level=info msg="cleaning up dead shim" Jul 2 08:54:57.161771 env[1151]: time="2024-07-02T08:54:57.161690101Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3047 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:54:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:54:57.162206 env[1151]: time="2024-07-02T08:54:57.162073743Z" level=error msg="copy shim log" error="read /proc/self/fd/63: file already closed" Jul 2 08:54:57.163006 env[1151]: time="2024-07-02T08:54:57.162947989Z" level=error msg="Failed to pipe stdout of container \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\"" error="reading from a closed fifo" Jul 2 08:54:57.163139 env[1151]: time="2024-07-02T08:54:57.163110665Z" level=error msg="Failed to pipe stderr of container \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\"" error="reading from a closed fifo" Jul 2 08:54:57.166705 env[1151]: time="2024-07-02T08:54:57.166630723Z" level=error msg="StartContainer for \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:54:57.167001 kubelet[1418]: E0702 08:54:57.166964 1418 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d" Jul 2 08:54:57.168961 kubelet[1418]: E0702 08:54:57.168832 1418 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:54:57.168961 kubelet[1418]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:54:57.168961 kubelet[1418]: rm /hostbin/cilium-mount Jul 2 08:54:57.169145 kubelet[1418]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wctfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-xxmv9_kube-system(0e2fb3ed-eaae-4ad0-ad22-9428fe918e10): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:54:57.169145 kubelet[1418]: E0702 08:54:57.168943 1418 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xxmv9" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" Jul 2 08:54:57.686829 env[1151]: time="2024-07-02T08:54:57.686680535Z" level=info msg="CreateContainer within sandbox \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 2 08:54:57.712371 env[1151]: time="2024-07-02T08:54:57.710780429Z" level=info msg="CreateContainer within sandbox \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\"" Jul 2 08:54:57.712920 env[1151]: time="2024-07-02T08:54:57.712772329Z" level=info msg="StartContainer for \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\"" Jul 2 08:54:57.751333 systemd[1]: Started cri-containerd-07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357.scope. Jul 2 08:54:57.775226 systemd[1]: cri-containerd-07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357.scope: Deactivated successfully. 
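
Both attempts at the mount-cgroup init container die in runc with "write /proc/self/attr/keycreate: invalid argument". That proc file is where a process asks the kernel to apply an SELinux label to the keyrings it is about to create; the container spec quoted above requests SELinuxOptions with Type spc_t and Level s0, and the kernel rejects the resulting label, most likely because the host's SELinux policy does not accept it for key objects. The hypothetical standalone program below only illustrates the write that fails at this point; the user and role parts of the label are assumed defaults (the log specifies only type and level), and whether the write succeeds depends entirely on the host's SELinux setup.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Label derived from the init-container spec above; user/role are assumed defaults.
        label := "system_u:system_r:spc_t:s0"

        // Before creating its session keyring, the container init process writes the
        // desired SELinux context here; EINVAL from this write is what appears in the
        // log as "write /proc/self/attr/keycreate: invalid argument".
        if err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0); err != nil {
            fmt.Println("keycreate label rejected:", err)
            return
        }
        fmt.Println("keycreate label accepted")
    }
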
Jul 2 08:54:57.791759 env[1151]: time="2024-07-02T08:54:57.791667017Z" level=info msg="shim disconnected" id=07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357 Jul 2 08:54:57.792301 env[1151]: time="2024-07-02T08:54:57.792258029Z" level=warning msg="cleaning up after shim disconnected" id=07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357 namespace=k8s.io Jul 2 08:54:57.792460 env[1151]: time="2024-07-02T08:54:57.792426907Z" level=info msg="cleaning up dead shim" Jul 2 08:54:57.808455 env[1151]: time="2024-07-02T08:54:57.808368802Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:54:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3084 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T08:54:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 08:54:57.809301 env[1151]: time="2024-07-02T08:54:57.809165843Z" level=error msg="copy shim log" error="read /proc/self/fd/75: file already closed" Jul 2 08:54:57.812648 env[1151]: time="2024-07-02T08:54:57.809911646Z" level=error msg="Failed to pipe stderr of container \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\"" error="reading from a closed fifo" Jul 2 08:54:57.813168 env[1151]: time="2024-07-02T08:54:57.812147907Z" level=error msg="Failed to pipe stdout of container \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\"" error="reading from a closed fifo" Jul 2 08:54:57.817295 env[1151]: time="2024-07-02T08:54:57.817251515Z" level=error msg="StartContainer for \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 08:54:57.817716 kubelet[1418]: E0702 08:54:57.817672 1418 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357" Jul 2 08:54:57.818348 kubelet[1418]: E0702 08:54:57.818312 1418 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 08:54:57.818348 kubelet[1418]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 08:54:57.818348 kubelet[1418]: rm /hostbin/cilium-mount Jul 2 08:54:57.818348 kubelet[1418]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wctfn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-xxmv9_kube-system(0e2fb3ed-eaae-4ad0-ad22-9428fe918e10): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 08:54:57.818618 kubelet[1418]: E0702 08:54:57.818389 1418 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-xxmv9" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" Jul 2 08:54:57.979304 kubelet[1418]: E0702 08:54:57.979175 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:58.629010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637649069.mount: Deactivated successfully. 
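
After the second identical failure kubelet stops retrying immediately: the entries that follow show the pod going into CrashLoopBackOff with "back-off 10s restarting failed container". Kubelet's container restart back-off starts at 10 seconds and doubles on each failure up to a 5-minute cap, resetting after a period of clean running. A tiny sketch of that schedule, with the reset behaviour simplified away:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initial = 10 * time.Second // first back-off, as logged: "back-off 10s"
            max     = 5 * time.Minute  // kubelet's cap on container restart back-off
        )
        backoff := initial
        for i := 1; i <= 8; i++ {
            fmt.Printf("failure %d: wait %v before restarting\n", i, backoff)
            backoff *= 2
            if backoff > max {
                backoff = max
            }
        }
        // failure 1: 10s, 2: 20s, 3: 40s, 4: 1m20s, 5: 2m40s, then capped at 5m0s
    }
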
Jul 2 08:54:58.695004 kubelet[1418]: I0702 08:54:58.694943 1418 scope.go:117] "RemoveContainer" containerID="2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d" Jul 2 08:54:58.695394 kubelet[1418]: I0702 08:54:58.695356 1418 scope.go:117] "RemoveContainer" containerID="2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d" Jul 2 08:54:58.697827 env[1151]: time="2024-07-02T08:54:58.697761692Z" level=info msg="RemoveContainer for \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\"" Jul 2 08:54:58.698601 env[1151]: time="2024-07-02T08:54:58.697790837Z" level=info msg="RemoveContainer for \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\"" Jul 2 08:54:58.699242 env[1151]: time="2024-07-02T08:54:58.699070045Z" level=error msg="RemoveContainer for \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\" failed" error="failed to set removing state for container \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\": container is already in removing state" Jul 2 08:54:58.699997 kubelet[1418]: E0702 08:54:58.699832 1418 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\": container is already in removing state" containerID="2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d" Jul 2 08:54:58.699997 kubelet[1418]: I0702 08:54:58.699964 1418 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d"} err="rpc error: code = Unknown desc = failed to set removing state for container \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\": container is already in removing state" Jul 2 08:54:58.781265 env[1151]: time="2024-07-02T08:54:58.781115689Z" level=info msg="RemoveContainer for \"2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d\" returns successfully" Jul 2 08:54:58.782281 kubelet[1418]: E0702 08:54:58.782223 1418 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-xxmv9_kube-system(0e2fb3ed-eaae-4ad0-ad22-9428fe918e10)\"" pod="kube-system/cilium-xxmv9" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" Jul 2 08:54:58.980464 kubelet[1418]: E0702 08:54:58.980351 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:54:59.391975 kubelet[1418]: I0702 08:54:59.391439 1418 setters.go:568] "Node became not ready" node="172.24.4.136" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T08:54:59Z","lastTransitionTime":"2024-07-02T08:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 08:54:59.735137 env[1151]: time="2024-07-02T08:54:59.729499311Z" level=info msg="StopPodSandbox for \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\"" Jul 2 08:54:59.735137 env[1151]: time="2024-07-02T08:54:59.729735005Z" level=info msg="Container to stop \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 08:54:59.732106 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2-shm.mount: Deactivated successfully. Jul 2 08:54:59.762480 systemd[1]: cri-containerd-5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2.scope: Deactivated successfully. Jul 2 08:54:59.798158 env[1151]: time="2024-07-02T08:54:59.798079165Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:59.804381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2-rootfs.mount: Deactivated successfully. Jul 2 08:54:59.805475 env[1151]: time="2024-07-02T08:54:59.805407078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:54:59.982140 kubelet[1418]: E0702 08:54:59.982047 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:00.180157 env[1151]: time="2024-07-02T08:55:00.179949869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 08:55:00.182411 env[1151]: time="2024-07-02T08:55:00.182343004Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 08:55:00.186988 env[1151]: time="2024-07-02T08:55:00.186930048Z" level=info msg="shim disconnected" id=5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2 Jul 2 08:55:00.187212 env[1151]: time="2024-07-02T08:55:00.187182764Z" level=warning msg="cleaning up after shim disconnected" id=5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2 namespace=k8s.io Jul 2 08:55:00.187290 env[1151]: time="2024-07-02T08:55:00.187272151Z" level=info msg="cleaning up dead shim" Jul 2 08:55:00.200206 env[1151]: time="2024-07-02T08:55:00.200117800Z" level=info msg="CreateContainer within sandbox \"8e831e71e4b8fffa430751cf28121ae92ec42f3261253e9739df6626c9d0696c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 08:55:00.218372 env[1151]: time="2024-07-02T08:55:00.218303439Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3117 runtime=io.containerd.runc.v2\n" Jul 2 08:55:00.218909 env[1151]: time="2024-07-02T08:55:00.218807728Z" level=info msg="TearDown network for sandbox \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\" successfully" Jul 2 08:55:00.218988 env[1151]: time="2024-07-02T08:55:00.218904591Z" level=info msg="StopPodSandbox for \"5c58e56a6637e75001434ba27b41e1825466bcb5116bd231e741d2036b29f9e2\" returns successfully" Jul 2 08:55:00.243934 env[1151]: time="2024-07-02T08:55:00.243880741Z" level=info msg="CreateContainer within sandbox \"8e831e71e4b8fffa430751cf28121ae92ec42f3261253e9739df6626c9d0696c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"193065f37bfbbb2fa25e835493708ab4ed639ebbf52887e2a18a754e3143d5c3\"" Jul 2 08:55:00.245016 env[1151]: time="2024-07-02T08:55:00.244988335Z" level=info msg="StartContainer for \"193065f37bfbbb2fa25e835493708ab4ed639ebbf52887e2a18a754e3143d5c3\"" Jul 2 08:55:00.264934 kubelet[1418]: W0702 08:55:00.264742 1418 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e2fb3ed_eaae_4ad0_ad22_9428fe918e10.slice/cri-containerd-2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d.scope WatchSource:0}: container "2966bcf9cb41146663587cbc9850f14591be9fab3786908ca4e1fddc32e4315d" in namespace "k8s.io": not found Jul 2 08:55:00.289404 systemd[1]: Started cri-containerd-193065f37bfbbb2fa25e835493708ab4ed639ebbf52887e2a18a754e3143d5c3.scope. Jul 2 08:55:00.343317 env[1151]: time="2024-07-02T08:55:00.343245926Z" level=info msg="StartContainer for \"193065f37bfbbb2fa25e835493708ab4ed639ebbf52887e2a18a754e3143d5c3\" returns successfully" Jul 2 08:55:00.363763 kubelet[1418]: I0702 08:55:00.363689 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-config-path\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.363763 kubelet[1418]: I0702 08:55:00.363751 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-etc-cni-netd\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363783 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-bpf-maps\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363812 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wctfn\" (UniqueName: \"kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-kube-api-access-wctfn\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363853 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-lib-modules\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363878 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cni-path\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363903 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-net\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363931 1418 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-ipsec-secrets\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363956 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-cgroup\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.363981 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-xtables-lock\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.364005 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hubble-tls\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364015 kubelet[1418]: I0702 08:55:00.364027 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hostproc\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364518 kubelet[1418]: I0702 08:55:00.364054 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-kernel\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364518 kubelet[1418]: I0702 08:55:00.364091 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-clustermesh-secrets\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364518 kubelet[1418]: I0702 08:55:00.364114 1418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-run\") pod \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\" (UID: \"0e2fb3ed-eaae-4ad0-ad22-9428fe918e10\") " Jul 2 08:55:00.364518 kubelet[1418]: I0702 08:55:00.364181 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.365382 kubelet[1418]: I0702 08:55:00.365305 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.365382 kubelet[1418]: I0702 08:55:00.365346 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.367443 kubelet[1418]: I0702 08:55:00.365620 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.367443 kubelet[1418]: I0702 08:55:00.365763 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.367443 kubelet[1418]: I0702 08:55:00.366775 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hostproc" (OuterVolumeSpecName: "hostproc") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.367443 kubelet[1418]: I0702 08:55:00.366786 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.367443 kubelet[1418]: I0702 08:55:00.366811 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.367443 kubelet[1418]: I0702 08:55:00.366918 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cni-path" (OuterVolumeSpecName: "cni-path") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.367443 kubelet[1418]: I0702 08:55:00.366979 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 08:55:00.371903 kubelet[1418]: I0702 08:55:00.371796 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 08:55:00.380198 kubelet[1418]: I0702 08:55:00.380141 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-kube-api-access-wctfn" (OuterVolumeSpecName: "kube-api-access-wctfn") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "kube-api-access-wctfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:55:00.380631 kubelet[1418]: I0702 08:55:00.380536 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 08:55:00.382499 kubelet[1418]: I0702 08:55:00.382374 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:55:00.387925 kubelet[1418]: I0702 08:55:00.387887 1418 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" (UID: "0e2fb3ed-eaae-4ad0-ad22-9428fe918e10"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.464936 1418 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cni-path\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.464983 1418 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-net\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.464998 1418 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wctfn\" (UniqueName: \"kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-kube-api-access-wctfn\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.465012 1418 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-lib-modules\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.465027 1418 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-cgroup\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.465040 1418 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-ipsec-secrets\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.465051 1418 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hubble-tls\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465066 kubelet[1418]: I0702 08:55:00.465063 1418 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-xtables-lock\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465456 kubelet[1418]: I0702 08:55:00.465078 1418 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-host-proc-sys-kernel\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465456 kubelet[1418]: I0702 08:55:00.465090 1418 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-clustermesh-secrets\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465456 kubelet[1418]: I0702 08:55:00.465101 1418 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-hostproc\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465456 kubelet[1418]: I0702 08:55:00.465113 1418 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-run\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465456 kubelet[1418]: I0702 08:55:00.465125 1418 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-bpf-maps\") on node 
\"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465456 kubelet[1418]: I0702 08:55:00.465137 1418 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-cilium-config-path\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.465456 kubelet[1418]: I0702 08:55:00.465150 1418 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10-etc-cni-netd\") on node \"172.24.4.136\" DevicePath \"\"" Jul 2 08:55:00.733900 systemd[1]: var-lib-kubelet-pods-0e2fb3ed\x2deaae\x2d4ad0\x2dad22\x2d9428fe918e10-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 08:55:00.734108 systemd[1]: var-lib-kubelet-pods-0e2fb3ed\x2deaae\x2d4ad0\x2dad22\x2d9428fe918e10-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwctfn.mount: Deactivated successfully. Jul 2 08:55:00.734252 systemd[1]: var-lib-kubelet-pods-0e2fb3ed\x2deaae\x2d4ad0\x2dad22\x2d9428fe918e10-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 08:55:00.734397 systemd[1]: var-lib-kubelet-pods-0e2fb3ed\x2deaae\x2d4ad0\x2dad22\x2d9428fe918e10-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 08:55:00.742086 kubelet[1418]: I0702 08:55:00.742044 1418 scope.go:117] "RemoveContainer" containerID="07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357" Jul 2 08:55:00.744900 env[1151]: time="2024-07-02T08:55:00.744781188Z" level=info msg="RemoveContainer for \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\"" Jul 2 08:55:00.747369 systemd[1]: Removed slice kubepods-burstable-pod0e2fb3ed_eaae_4ad0_ad22_9428fe918e10.slice. Jul 2 08:55:00.756060 env[1151]: time="2024-07-02T08:55:00.755990586Z" level=info msg="RemoveContainer for \"07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357\" returns successfully" Jul 2 08:55:00.853766 kubelet[1418]: I0702 08:55:00.853713 1418 topology_manager.go:215] "Topology Admit Handler" podUID="f84e14b6-a4de-4291-a9dd-10ebdeae3003" podNamespace="kube-system" podName="cilium-nxrt7" Jul 2 08:55:00.854248 kubelet[1418]: E0702 08:55:00.854205 1418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" containerName="mount-cgroup" Jul 2 08:55:00.854546 kubelet[1418]: I0702 08:55:00.854505 1418 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" containerName="mount-cgroup" Jul 2 08:55:00.854769 kubelet[1418]: I0702 08:55:00.854733 1418 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" containerName="mount-cgroup" Jul 2 08:55:00.855140 kubelet[1418]: E0702 08:55:00.855095 1418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" containerName="mount-cgroup" Jul 2 08:55:00.867198 systemd[1]: Created slice kubepods-burstable-podf84e14b6_a4de_4291_a9dd_10ebdeae3003.slice. 
Jul 2 08:55:00.888772 kubelet[1418]: I0702 08:55:00.888703 1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-k49b8" podStartSLOduration=1.792935136 podStartE2EDuration="4.888596689s" podCreationTimestamp="2024-07-02 08:54:56 +0000 UTC" firstStartedPulling="2024-07-02 08:54:57.08906107 +0000 UTC m=+81.605621471" lastFinishedPulling="2024-07-02 08:55:00.184722593 +0000 UTC m=+84.701283024" observedRunningTime="2024-07-02 08:55:00.856353592 +0000 UTC m=+85.372913993" watchObservedRunningTime="2024-07-02 08:55:00.888596689 +0000 UTC m=+85.405157121" Jul 2 08:55:00.969494 kubelet[1418]: I0702 08:55:00.969453 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-xtables-lock\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969494 kubelet[1418]: I0702 08:55:00.969501 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-host-proc-sys-net\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969543 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-bpf-maps\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969565 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-hostproc\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969595 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-etc-cni-netd\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969624 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f84e14b6-a4de-4291-a9dd-10ebdeae3003-cilium-ipsec-secrets\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969648 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-cilium-run\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969670 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-cilium-cgroup\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " 
pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969693 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-cni-path\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969715 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-lib-modules\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969737 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f84e14b6-a4de-4291-a9dd-10ebdeae3003-cilium-config-path\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969760 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4flh\" (UniqueName: \"kubernetes.io/projected/f84e14b6-a4de-4291-a9dd-10ebdeae3003-kube-api-access-z4flh\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.969784 kubelet[1418]: I0702 08:55:00.969785 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f84e14b6-a4de-4291-a9dd-10ebdeae3003-clustermesh-secrets\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.970470 kubelet[1418]: I0702 08:55:00.969807 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f84e14b6-a4de-4291-a9dd-10ebdeae3003-hubble-tls\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.970470 kubelet[1418]: I0702 08:55:00.969829 1418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f84e14b6-a4de-4291-a9dd-10ebdeae3003-host-proc-sys-kernel\") pod \"cilium-nxrt7\" (UID: \"f84e14b6-a4de-4291-a9dd-10ebdeae3003\") " pod="kube-system/cilium-nxrt7" Jul 2 08:55:00.982380 kubelet[1418]: E0702 08:55:00.982299 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:01.153423 kubelet[1418]: I0702 08:55:01.151266 1418 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0e2fb3ed-eaae-4ad0-ad22-9428fe918e10" path="/var/lib/kubelet/pods/0e2fb3ed-eaae-4ad0-ad22-9428fe918e10/volumes" Jul 2 08:55:01.183592 env[1151]: time="2024-07-02T08:55:01.183124294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxrt7,Uid:f84e14b6-a4de-4291-a9dd-10ebdeae3003,Namespace:kube-system,Attempt:0,}" Jul 2 08:55:01.203752 env[1151]: time="2024-07-02T08:55:01.203628484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:55:01.204019 env[1151]: time="2024-07-02T08:55:01.203988432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:55:01.204161 env[1151]: time="2024-07-02T08:55:01.204123084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:55:01.204579 env[1151]: time="2024-07-02T08:55:01.204527195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3 pid=3187 runtime=io.containerd.runc.v2 Jul 2 08:55:01.230504 systemd[1]: Started cri-containerd-46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3.scope. Jul 2 08:55:01.274309 env[1151]: time="2024-07-02T08:55:01.274228336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nxrt7,Uid:f84e14b6-a4de-4291-a9dd-10ebdeae3003,Namespace:kube-system,Attempt:0,} returns sandbox id \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\"" Jul 2 08:55:01.278540 env[1151]: time="2024-07-02T08:55:01.278450332Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 08:55:01.302905 env[1151]: time="2024-07-02T08:55:01.302811670Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2\"" Jul 2 08:55:01.303630 env[1151]: time="2024-07-02T08:55:01.303598592Z" level=info msg="StartContainer for \"56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2\"" Jul 2 08:55:01.323809 systemd[1]: Started cri-containerd-56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2.scope. Jul 2 08:55:01.375812 env[1151]: time="2024-07-02T08:55:01.375751738Z" level=info msg="StartContainer for \"56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2\" returns successfully" Jul 2 08:55:01.430399 systemd[1]: cri-containerd-56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2.scope: Deactivated successfully. 
Jul 2 08:55:01.464060 env[1151]: time="2024-07-02T08:55:01.463968385Z" level=info msg="shim disconnected" id=56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2 Jul 2 08:55:01.464060 env[1151]: time="2024-07-02T08:55:01.464054998Z" level=warning msg="cleaning up after shim disconnected" id=56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2 namespace=k8s.io Jul 2 08:55:01.464060 env[1151]: time="2024-07-02T08:55:01.464076388Z" level=info msg="cleaning up dead shim" Jul 2 08:55:01.474072 env[1151]: time="2024-07-02T08:55:01.473989906Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3272 runtime=io.containerd.runc.v2\n" Jul 2 08:55:01.767412 env[1151]: time="2024-07-02T08:55:01.767310869Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 08:55:01.813472 env[1151]: time="2024-07-02T08:55:01.813378360Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9\"" Jul 2 08:55:01.816342 env[1151]: time="2024-07-02T08:55:01.816271285Z" level=info msg="StartContainer for \"9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9\"" Jul 2 08:55:01.855415 systemd[1]: Started cri-containerd-9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9.scope. Jul 2 08:55:01.909001 env[1151]: time="2024-07-02T08:55:01.908942216Z" level=info msg="StartContainer for \"9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9\" returns successfully" Jul 2 08:55:01.926504 systemd[1]: cri-containerd-9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9.scope: Deactivated successfully. Jul 2 08:55:01.952011 env[1151]: time="2024-07-02T08:55:01.951926032Z" level=info msg="shim disconnected" id=9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9 Jul 2 08:55:01.952011 env[1151]: time="2024-07-02T08:55:01.951993129Z" level=warning msg="cleaning up after shim disconnected" id=9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9 namespace=k8s.io Jul 2 08:55:01.952011 env[1151]: time="2024-07-02T08:55:01.952005743Z" level=info msg="cleaning up dead shim" Jul 2 08:55:01.960909 env[1151]: time="2024-07-02T08:55:01.960828678Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3332 runtime=io.containerd.runc.v2\n" Jul 2 08:55:01.983435 kubelet[1418]: E0702 08:55:01.983377 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:02.092975 kubelet[1418]: E0702 08:55:02.091521 1418 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 08:55:02.735481 systemd[1]: run-containerd-runc-k8s.io-9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9-runc.0tDKcT.mount: Deactivated successfully. Jul 2 08:55:02.735724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9-rootfs.mount: Deactivated successfully. 
Jul 2 08:55:02.772714 env[1151]: time="2024-07-02T08:55:02.772632837Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 08:55:02.811400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413200256.mount: Deactivated successfully. Jul 2 08:55:02.831765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3231702040.mount: Deactivated successfully. Jul 2 08:55:02.835897 env[1151]: time="2024-07-02T08:55:02.835759324Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069\"" Jul 2 08:55:02.838045 env[1151]: time="2024-07-02T08:55:02.837982547Z" level=info msg="StartContainer for \"6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069\"" Jul 2 08:55:02.872485 systemd[1]: Started cri-containerd-6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069.scope. Jul 2 08:55:02.928593 env[1151]: time="2024-07-02T08:55:02.928468041Z" level=info msg="StartContainer for \"6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069\" returns successfully" Jul 2 08:55:02.942436 systemd[1]: cri-containerd-6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069.scope: Deactivated successfully. Jul 2 08:55:02.972596 env[1151]: time="2024-07-02T08:55:02.972504954Z" level=info msg="shim disconnected" id=6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069 Jul 2 08:55:02.973143 env[1151]: time="2024-07-02T08:55:02.973098691Z" level=warning msg="cleaning up after shim disconnected" id=6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069 namespace=k8s.io Jul 2 08:55:02.973366 env[1151]: time="2024-07-02T08:55:02.973329645Z" level=info msg="cleaning up dead shim" Jul 2 08:55:02.984571 kubelet[1418]: E0702 08:55:02.984477 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:02.986603 env[1151]: time="2024-07-02T08:55:02.986534887Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3388 runtime=io.containerd.runc.v2\n" Jul 2 08:55:03.386654 kubelet[1418]: W0702 08:55:03.386589 1418 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e2fb3ed_eaae_4ad0_ad22_9428fe918e10.slice/cri-containerd-07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357.scope WatchSource:0}: container "07fb1acbb75050370741b4a3c8019a41af1ecce49c5763dc41a3380cda9a4357" in namespace "k8s.io": not found Jul 2 08:55:03.783675 env[1151]: time="2024-07-02T08:55:03.783579134Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 08:55:03.830700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3497881070.mount: Deactivated successfully. 
Jul 2 08:55:03.843426 env[1151]: time="2024-07-02T08:55:03.843337353Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea\"" Jul 2 08:55:03.845699 env[1151]: time="2024-07-02T08:55:03.845574964Z" level=info msg="StartContainer for \"432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea\"" Jul 2 08:55:03.899321 systemd[1]: Started cri-containerd-432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea.scope. Jul 2 08:55:03.931802 systemd[1]: cri-containerd-432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea.scope: Deactivated successfully. Jul 2 08:55:03.934097 env[1151]: time="2024-07-02T08:55:03.933949632Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84e14b6_a4de_4291_a9dd_10ebdeae3003.slice/cri-containerd-432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea.scope/memory.events\": no such file or directory" Jul 2 08:55:03.938363 env[1151]: time="2024-07-02T08:55:03.938293305Z" level=info msg="StartContainer for \"432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea\" returns successfully" Jul 2 08:55:03.972470 env[1151]: time="2024-07-02T08:55:03.972411867Z" level=info msg="shim disconnected" id=432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea Jul 2 08:55:03.972757 env[1151]: time="2024-07-02T08:55:03.972735095Z" level=warning msg="cleaning up after shim disconnected" id=432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea namespace=k8s.io Jul 2 08:55:03.972827 env[1151]: time="2024-07-02T08:55:03.972813071Z" level=info msg="cleaning up dead shim" Jul 2 08:55:03.981899 env[1151]: time="2024-07-02T08:55:03.981817877Z" level=warning msg="cleanup warnings time=\"2024-07-02T08:55:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3447 runtime=io.containerd.runc.v2\n" Jul 2 08:55:03.985644 kubelet[1418]: E0702 08:55:03.985570 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:04.736261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea-rootfs.mount: Deactivated successfully. Jul 2 08:55:04.791370 env[1151]: time="2024-07-02T08:55:04.791282167Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 08:55:04.827300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173750019.mount: Deactivated successfully. Jul 2 08:55:04.834953 env[1151]: time="2024-07-02T08:55:04.834813834Z" level=info msg="CreateContainer within sandbox \"46cb9d5a2eb4acd6e05a2b131617673b69c246233d53e2a93d98ad93a16f50f3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600\"" Jul 2 08:55:04.836532 env[1151]: time="2024-07-02T08:55:04.836469429Z" level=info msg="StartContainer for \"73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600\"" Jul 2 08:55:04.883618 systemd[1]: Started cri-containerd-73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600.scope. 
Jul 2 08:55:04.941787 env[1151]: time="2024-07-02T08:55:04.941695010Z" level=info msg="StartContainer for \"73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600\" returns successfully" Jul 2 08:55:04.985805 kubelet[1418]: E0702 08:55:04.985729 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:05.784924 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 08:55:05.836888 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jul 2 08:55:05.847640 kubelet[1418]: I0702 08:55:05.847563 1418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-nxrt7" podStartSLOduration=5.847508313 podStartE2EDuration="5.847508313s" podCreationTimestamp="2024-07-02 08:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:55:05.838258359 +0000 UTC m=+90.354818801" watchObservedRunningTime="2024-07-02 08:55:05.847508313 +0000 UTC m=+90.364068704" Jul 2 08:55:05.986107 kubelet[1418]: E0702 08:55:05.986030 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:06.515898 kubelet[1418]: W0702 08:55:06.515726 1418 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84e14b6_a4de_4291_a9dd_10ebdeae3003.slice/cri-containerd-56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2.scope WatchSource:0}: task 56f3ceeeab0295406720aa70b3bb41d24f5cc548504f614b17e4fd2231874bd2 not found: not found Jul 2 08:55:06.987725 kubelet[1418]: E0702 08:55:06.987628 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:07.697823 systemd[1]: run-containerd-runc-k8s.io-73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600-runc.CmFOM6.mount: Deactivated successfully. Jul 2 08:55:07.989149 kubelet[1418]: E0702 08:55:07.989052 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:08.990200 kubelet[1418]: E0702 08:55:08.990116 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:09.039206 systemd-networkd[970]: lxc_health: Link UP Jul 2 08:55:09.045902 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 08:55:09.043697 systemd-networkd[970]: lxc_health: Gained carrier Jul 2 08:55:09.626867 kubelet[1418]: W0702 08:55:09.625766 1418 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84e14b6_a4de_4291_a9dd_10ebdeae3003.slice/cri-containerd-9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9.scope WatchSource:0}: task 9c759efa666ca02c23d4535d5cbb73636226f8ddb12ec1bbb1dac2f1a569e1d9 not found: not found Jul 2 08:55:09.982301 systemd[1]: run-containerd-runc-k8s.io-73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600-runc.EpDS5M.mount: Deactivated successfully. 
Jul 2 08:55:09.991664 kubelet[1418]: E0702 08:55:09.991561 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:10.992493 kubelet[1418]: E0702 08:55:10.992378 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:11.057141 systemd-networkd[970]: lxc_health: Gained IPv6LL Jul 2 08:55:11.994782 kubelet[1418]: E0702 08:55:11.994040 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:12.176057 systemd[1]: run-containerd-runc-k8s.io-73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600-runc.VKrLT2.mount: Deactivated successfully. Jul 2 08:55:12.746414 kubelet[1418]: W0702 08:55:12.746351 1418 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84e14b6_a4de_4291_a9dd_10ebdeae3003.slice/cri-containerd-6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069.scope WatchSource:0}: task 6ebf9c2ac1abadec06e7310d5281f2fe945be2d67d900f154cdb9cf2e4433069 not found: not found Jul 2 08:55:12.994396 kubelet[1418]: E0702 08:55:12.994339 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:13.995127 kubelet[1418]: E0702 08:55:13.995024 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:14.373138 systemd[1]: run-containerd-runc-k8s.io-73d486599692c762ddae0d6ab2198871ebe073677de397df6b5325dde5504600-runc.44f186.mount: Deactivated successfully. Jul 2 08:55:14.995582 kubelet[1418]: E0702 08:55:14.995482 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:15.854720 kubelet[1418]: W0702 08:55:15.854576 1418 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84e14b6_a4de_4291_a9dd_10ebdeae3003.slice/cri-containerd-432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea.scope WatchSource:0}: task 432cfc0597406b74375aa8496be3563414997a605183e94f91c4e510a20ca0ea not found: not found Jul 2 08:55:15.997312 kubelet[1418]: E0702 08:55:15.997169 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:16.894962 kubelet[1418]: E0702 08:55:16.894823 1418 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:16.997525 kubelet[1418]: E0702 08:55:16.997411 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:17.997819 kubelet[1418]: E0702 08:55:17.997712 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:18.998725 kubelet[1418]: E0702 08:55:18.998649 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 08:55:19.999782 kubelet[1418]: E0702 08:55:19.999736 1418 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"