Oct 2 20:32:33.020866 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 20:32:33.020887 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:32:33.020900 kernel: BIOS-provided physical RAM map: Oct 2 20:32:33.020907 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 20:32:33.020913 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 20:32:33.020920 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 20:32:33.020928 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Oct 2 20:32:33.020935 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Oct 2 20:32:33.020943 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 20:32:33.020949 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 20:32:33.020956 kernel: NX (Execute Disable) protection: active Oct 2 20:32:33.020962 kernel: SMBIOS 2.8 present. Oct 2 20:32:33.020969 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Oct 2 20:32:33.020976 kernel: Hypervisor detected: KVM Oct 2 20:32:33.020984 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 20:32:33.020993 kernel: kvm-clock: cpu 0, msr 2bf8a001, primary cpu clock Oct 2 20:32:33.021000 kernel: kvm-clock: using sched offset of 9070517641 cycles Oct 2 20:32:33.021008 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 20:32:33.021015 kernel: tsc: Detected 1996.249 MHz processor Oct 2 20:32:33.021023 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 20:32:33.021031 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 20:32:33.021038 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Oct 2 20:32:33.021045 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 20:32:33.021055 kernel: ACPI: Early table checksum verification disabled Oct 2 20:32:33.021062 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Oct 2 20:32:33.021070 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:32:33.021077 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:32:33.021085 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:32:33.021092 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 2 20:32:33.021099 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:32:33.021107 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:32:33.021114 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Oct 2 20:32:33.021123 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Oct 2 20:32:33.021131 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Oct 2 20:32:33.021138 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Oct 2 20:32:33.021145 
kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Oct 2 20:32:33.021152 kernel: No NUMA configuration found Oct 2 20:32:33.021159 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Oct 2 20:32:33.021167 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Oct 2 20:32:33.021174 kernel: Zone ranges: Oct 2 20:32:33.021186 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 20:32:33.021194 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Oct 2 20:32:33.021201 kernel: Normal empty Oct 2 20:32:33.021209 kernel: Movable zone start for each node Oct 2 20:32:33.021217 kernel: Early memory node ranges Oct 2 20:32:33.021224 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 20:32:33.021233 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Oct 2 20:32:33.021241 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Oct 2 20:32:33.021248 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 20:32:33.021256 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 20:32:33.021263 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Oct 2 20:32:33.021271 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 20:32:33.021278 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 20:32:33.021286 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 20:32:33.021294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 20:32:33.021301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 20:32:33.021310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 20:32:33.021318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 20:32:33.021387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 20:32:33.021395 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 20:32:33.021403 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 20:32:33.021411 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 2 20:32:33.021418 kernel: Booting paravirtualized kernel on KVM Oct 2 20:32:33.021426 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 20:32:33.021434 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 20:32:33.021445 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 20:32:33.021453 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 20:32:33.021460 kernel: pcpu-alloc: [0] 0 1 Oct 2 20:32:33.021468 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Oct 2 20:32:33.021475 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 2 20:32:33.021483 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Oct 2 20:32:33.021490 kernel: Policy zone: DMA32 Oct 2 20:32:33.021500 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:32:33.021512 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 2 20:32:33.021519 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 20:32:33.021527 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 20:32:33.021535 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 20:32:33.021543 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 121020K reserved, 0K cma-reserved) Oct 2 20:32:33.021550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 20:32:33.021558 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 20:32:33.021565 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 20:32:33.021576 kernel: rcu: Hierarchical RCU implementation. Oct 2 20:32:33.021584 kernel: rcu: RCU event tracing is enabled. Oct 2 20:32:33.021592 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 20:32:33.021600 kernel: Rude variant of Tasks RCU enabled. Oct 2 20:32:33.021608 kernel: Tracing variant of Tasks RCU enabled. Oct 2 20:32:33.021616 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 20:32:33.021623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 20:32:33.021631 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 2 20:32:33.021639 kernel: Console: colour VGA+ 80x25 Oct 2 20:32:33.021648 kernel: printk: console [tty0] enabled Oct 2 20:32:33.021656 kernel: printk: console [ttyS0] enabled Oct 2 20:32:33.021664 kernel: ACPI: Core revision 20210730 Oct 2 20:32:33.021671 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 20:32:33.021679 kernel: x2apic enabled Oct 2 20:32:33.021687 kernel: Switched APIC routing to physical x2apic. Oct 2 20:32:33.021695 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 20:32:33.021702 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 20:32:33.021710 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Oct 2 20:32:33.021718 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 2 20:32:33.021728 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 2 20:32:33.021735 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 20:32:33.021743 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 20:32:33.021751 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 20:32:33.021758 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 20:32:33.021766 kernel: Speculative Store Bypass: Vulnerable Oct 2 20:32:33.021774 kernel: x86/fpu: x87 FPU will use FXSAVE Oct 2 20:32:33.021782 kernel: Freeing SMP alternatives memory: 32K Oct 2 20:32:33.021789 kernel: pid_max: default: 32768 minimum: 301 Oct 2 20:32:33.021800 kernel: LSM: Security Framework initializing Oct 2 20:32:33.021807 kernel: SELinux: Initializing. Oct 2 20:32:33.021815 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 20:32:33.021823 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 20:32:33.021831 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Oct 2 20:32:33.021839 kernel: Performance Events: AMD PMU driver. Oct 2 20:32:33.021846 kernel: ... version: 0 Oct 2 20:32:33.021854 kernel: ... bit width: 48 Oct 2 20:32:33.021862 kernel: ... generic registers: 4 Oct 2 20:32:33.021880 kernel: ... 
value mask: 0000ffffffffffff Oct 2 20:32:33.021888 kernel: ... max period: 00007fffffffffff Oct 2 20:32:33.021896 kernel: ... fixed-purpose events: 0 Oct 2 20:32:33.021906 kernel: ... event mask: 000000000000000f Oct 2 20:32:33.021914 kernel: signal: max sigframe size: 1440 Oct 2 20:32:33.021921 kernel: rcu: Hierarchical SRCU implementation. Oct 2 20:32:33.021930 kernel: smp: Bringing up secondary CPUs ... Oct 2 20:32:33.021938 kernel: x86: Booting SMP configuration: Oct 2 20:32:33.021948 kernel: .... node #0, CPUs: #1 Oct 2 20:32:33.021956 kernel: kvm-clock: cpu 1, msr 2bf8a041, secondary cpu clock Oct 2 20:32:33.021964 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Oct 2 20:32:33.021972 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 20:32:33.021980 kernel: smpboot: Max logical packages: 2 Oct 2 20:32:33.021988 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Oct 2 20:32:33.021996 kernel: devtmpfs: initialized Oct 2 20:32:33.022004 kernel: x86/mm: Memory block size: 128MB Oct 2 20:32:33.022012 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 20:32:33.022022 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 20:32:33.022030 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 20:32:33.022038 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 20:32:33.022046 kernel: audit: initializing netlink subsys (disabled) Oct 2 20:32:33.022055 kernel: audit: type=2000 audit(1696278752.687:1): state=initialized audit_enabled=0 res=1 Oct 2 20:32:33.022062 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 20:32:33.022070 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 20:32:33.022078 kernel: cpuidle: using governor menu Oct 2 20:32:33.022086 kernel: ACPI: bus type PCI registered Oct 2 20:32:33.022096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 20:32:33.022104 kernel: dca service started, version 1.12.1 Oct 2 20:32:33.022112 kernel: PCI: Using configuration type 1 for base access Oct 2 20:32:33.022120 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 20:32:33.022128 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 20:32:33.022136 kernel: ACPI: Added _OSI(Module Device) Oct 2 20:32:33.022144 kernel: ACPI: Added _OSI(Processor Device) Oct 2 20:32:33.022152 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 20:32:33.022160 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 20:32:33.022169 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 20:32:33.022177 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 20:32:33.022185 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 20:32:33.022193 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 20:32:33.022201 kernel: ACPI: Interpreter enabled Oct 2 20:32:33.022209 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 20:32:33.022217 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 20:32:33.022225 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 20:32:33.022233 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 20:32:33.022243 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 20:32:33.024432 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 2 20:32:33.024531 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Oct 2 20:32:33.024546 kernel: acpiphp: Slot [3] registered Oct 2 20:32:33.024555 kernel: acpiphp: Slot [4] registered Oct 2 20:32:33.024563 kernel: acpiphp: Slot [5] registered Oct 2 20:32:33.024572 kernel: acpiphp: Slot [6] registered Oct 2 20:32:33.024584 kernel: acpiphp: Slot [7] registered Oct 2 20:32:33.024593 kernel: acpiphp: Slot [8] registered Oct 2 20:32:33.024601 kernel: acpiphp: Slot [9] registered Oct 2 20:32:33.024610 kernel: acpiphp: Slot [10] registered Oct 2 20:32:33.024618 kernel: acpiphp: Slot [11] registered Oct 2 20:32:33.024626 kernel: acpiphp: Slot [12] registered Oct 2 20:32:33.024635 kernel: acpiphp: Slot [13] registered Oct 2 20:32:33.024643 kernel: acpiphp: Slot [14] registered Oct 2 20:32:33.024652 kernel: acpiphp: Slot [15] registered Oct 2 20:32:33.024660 kernel: acpiphp: Slot [16] registered Oct 2 20:32:33.024671 kernel: acpiphp: Slot [17] registered Oct 2 20:32:33.024681 kernel: acpiphp: Slot [18] registered Oct 2 20:32:33.024690 kernel: acpiphp: Slot [19] registered Oct 2 20:32:33.024698 kernel: acpiphp: Slot [20] registered Oct 2 20:32:33.024706 kernel: acpiphp: Slot [21] registered Oct 2 20:32:33.024714 kernel: acpiphp: Slot [22] registered Oct 2 20:32:33.024722 kernel: acpiphp: Slot [23] registered Oct 2 20:32:33.024729 kernel: acpiphp: Slot [24] registered Oct 2 20:32:33.024737 kernel: acpiphp: Slot [25] registered Oct 2 20:32:33.024749 kernel: acpiphp: Slot [26] registered Oct 2 20:32:33.024757 kernel: acpiphp: Slot [27] registered Oct 2 20:32:33.024765 kernel: acpiphp: Slot [28] registered Oct 2 20:32:33.024773 kernel: acpiphp: Slot [29] registered Oct 2 20:32:33.024781 kernel: acpiphp: Slot [30] registered Oct 2 20:32:33.024789 kernel: acpiphp: Slot [31] registered Oct 2 20:32:33.024797 kernel: PCI host bridge to bus 0000:00 Oct 2 20:32:33.024910 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 20:32:33.024986 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 20:32:33.025063 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 20:32:33.025136 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 2 
20:32:33.025208 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 20:32:33.025280 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 20:32:33.025412 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 20:32:33.025516 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 20:32:33.025617 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 20:32:33.025702 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Oct 2 20:32:33.025787 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 20:32:33.025871 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 20:32:33.025954 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 20:32:33.026036 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 20:32:33.026125 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 20:32:33.026224 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 20:32:33.026311 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 20:32:33.026431 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Oct 2 20:32:33.026518 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Oct 2 20:32:33.026606 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Oct 2 20:32:33.026689 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Oct 2 20:32:33.026777 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Oct 2 20:32:33.026862 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 20:32:33.026962 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 2 20:32:33.027047 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Oct 2 20:32:33.027131 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Oct 2 20:32:33.027215 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Oct 2 20:32:33.027299 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Oct 2 20:32:33.027485 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 20:32:33.027587 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 20:32:33.027685 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Oct 2 20:32:33.027789 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 2 20:32:33.027898 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Oct 2 20:32:33.028005 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Oct 2 20:32:33.028103 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 2 20:32:33.028217 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 20:32:33.028314 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Oct 2 20:32:33.034614 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Oct 2 20:32:33.034631 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 20:32:33.034641 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 20:32:33.034650 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 20:32:33.034660 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 20:32:33.034669 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 20:32:33.034688 kernel: iommu: Default domain type: Translated Oct 2 20:32:33.034697 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Oct 2 20:32:33.034804 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 20:32:33.034898 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 20:32:33.034988 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 20:32:33.035002 kernel: vgaarb: loaded Oct 2 20:32:33.035011 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 20:32:33.035020 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 20:32:33.035029 kernel: PTP clock support registered Oct 2 20:32:33.035042 kernel: PCI: Using ACPI for IRQ routing Oct 2 20:32:33.035051 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 20:32:33.035060 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 20:32:33.035069 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Oct 2 20:32:33.035078 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 20:32:33.035087 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 20:32:33.035096 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 20:32:33.035105 kernel: pnp: PnP ACPI init Oct 2 20:32:33.035211 kernel: pnp 00:03: [dma 2] Oct 2 20:32:33.035237 kernel: pnp: PnP ACPI: found 5 devices Oct 2 20:32:33.035246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 20:32:33.035255 kernel: NET: Registered PF_INET protocol family Oct 2 20:32:33.035264 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 20:32:33.035273 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 2 20:32:33.035282 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 20:32:33.035291 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 20:32:33.035300 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 20:32:33.035311 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 2 20:32:33.035334 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 20:32:33.035343 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 20:32:33.035352 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 20:32:33.035361 kernel: NET: Registered PF_XDP protocol family Oct 2 20:32:33.035482 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 20:32:33.035567 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 20:32:33.035646 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 20:32:33.035724 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 2 20:32:33.035799 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 20:32:33.035898 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 20:32:33.035986 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 20:32:33.036068 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 20:32:33.036080 kernel: PCI: CLS 0 bytes, default 64 Oct 2 20:32:33.036089 kernel: Initialise system trusted keyrings Oct 2 20:32:33.036097 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 2 20:32:33.036106 kernel: Key type asymmetric registered Oct 2 20:32:33.036118 kernel: Asymmetric key parser 'x509' registered Oct 2 20:32:33.036126 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 20:32:33.036135 kernel: io scheduler mq-deadline 
registered Oct 2 20:32:33.036143 kernel: io scheduler kyber registered Oct 2 20:32:33.036151 kernel: io scheduler bfq registered Oct 2 20:32:33.036159 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 20:32:33.036168 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 2 20:32:33.036176 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 20:32:33.036184 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 2 20:32:33.036196 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 20:32:33.036204 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 20:32:33.036213 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 20:32:33.036221 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 20:32:33.036229 kernel: random: crng init done Oct 2 20:32:33.036237 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 20:32:33.036245 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 20:32:33.036253 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 20:32:33.036360 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 2 20:32:33.036447 kernel: rtc_cmos 00:04: registered as rtc0 Oct 2 20:32:33.036522 kernel: rtc_cmos 00:04: setting system clock to 2023-10-02T20:32:32 UTC (1696278752) Oct 2 20:32:33.036595 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 2 20:32:33.036607 kernel: NET: Registered PF_INET6 protocol family Oct 2 20:32:33.036616 kernel: Segment Routing with IPv6 Oct 2 20:32:33.036624 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 20:32:33.036632 kernel: NET: Registered PF_PACKET protocol family Oct 2 20:32:33.036640 kernel: Key type dns_resolver registered Oct 2 20:32:33.036652 kernel: IPI shorthand broadcast: enabled Oct 2 20:32:33.036660 kernel: sched_clock: Marking stable (727572077, 118880322)->(909900448, -63448049) Oct 2 20:32:33.036668 kernel: registered taskstats version 1 Oct 2 20:32:33.036676 kernel: Loading compiled-in X.509 certificates Oct 2 20:32:33.036685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 20:32:33.036693 kernel: Key type .fscrypt registered Oct 2 20:32:33.036701 kernel: Key type fscrypt-provisioning registered Oct 2 20:32:33.036709 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 2 20:32:33.036725 kernel: ima: Allocated hash algorithm: sha1 Oct 2 20:32:33.036734 kernel: ima: No architecture policies found Oct 2 20:32:33.036742 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 20:32:33.036750 kernel: Write protecting the kernel read-only data: 28672k Oct 2 20:32:33.036758 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 20:32:33.036766 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 20:32:33.036775 kernel: Run /init as init process Oct 2 20:32:33.036783 kernel: with arguments: Oct 2 20:32:33.036790 kernel: /init Oct 2 20:32:33.036799 kernel: with environment: Oct 2 20:32:33.036808 kernel: HOME=/ Oct 2 20:32:33.036817 kernel: TERM=linux Oct 2 20:32:33.036825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 20:32:33.036836 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:32:33.036847 systemd[1]: Detected virtualization kvm. Oct 2 20:32:33.036856 systemd[1]: Detected architecture x86-64. Oct 2 20:32:33.036865 systemd[1]: Running in initrd. Oct 2 20:32:33.036875 systemd[1]: No hostname configured, using default hostname. Oct 2 20:32:33.036884 systemd[1]: Hostname set to . Oct 2 20:32:33.036893 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:32:33.036902 systemd[1]: Queued start job for default target initrd.target. Oct 2 20:32:33.036911 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:32:33.036920 systemd[1]: Reached target cryptsetup.target. Oct 2 20:32:33.036929 systemd[1]: Reached target paths.target. Oct 2 20:32:33.036938 systemd[1]: Reached target slices.target. Oct 2 20:32:33.036949 systemd[1]: Reached target swap.target. Oct 2 20:32:33.036957 systemd[1]: Reached target timers.target. Oct 2 20:32:33.036967 systemd[1]: Listening on iscsid.socket. Oct 2 20:32:33.036976 systemd[1]: Listening on iscsiuio.socket. Oct 2 20:32:33.036985 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 20:32:33.036994 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 20:32:33.037003 systemd[1]: Listening on systemd-journald.socket. Oct 2 20:32:33.037012 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:32:33.037022 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:32:33.037031 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:32:33.037040 systemd[1]: Reached target sockets.target. Oct 2 20:32:33.037049 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:32:33.037069 systemd[1]: Finished network-cleanup.service. Oct 2 20:32:33.037080 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 20:32:33.037091 systemd[1]: Starting systemd-journald.service... Oct 2 20:32:33.037100 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:32:33.037109 systemd[1]: Starting systemd-resolved.service... Oct 2 20:32:33.037118 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 20:32:33.037127 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:32:33.037142 systemd-journald[184]: Journal started Oct 2 20:32:33.037203 systemd-journald[184]: Runtime Journal (/run/log/journal/458131273c994b8ca43fdbbefc125c93) is 4.9M, max 39.5M, 34.5M free. 
Oct 2 20:32:32.997370 systemd-modules-load[185]: Inserted module 'overlay' Oct 2 20:32:33.057157 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 20:32:33.057186 kernel: Bridge firewalling registered Oct 2 20:32:33.047237 systemd-resolved[186]: Positive Trust Anchors: Oct 2 20:32:33.062594 systemd[1]: Started systemd-journald.service. Oct 2 20:32:33.062617 kernel: audit: type=1130 audit(1696278753.057:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.047248 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:32:33.069582 kernel: audit: type=1130 audit(1696278753.062:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.047287 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:32:33.075603 kernel: audit: type=1130 audit(1696278753.069:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.050175 systemd-resolved[186]: Defaulting to hostname 'linux'. Oct 2 20:32:33.079880 kernel: audit: type=1130 audit(1696278753.075:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.051060 systemd-modules-load[185]: Inserted module 'br_netfilter' Oct 2 20:32:33.085255 kernel: audit: type=1130 audit(1696278753.080:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.085273 kernel: SCSI subsystem initialized Oct 2 20:32:33.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 20:32:33.063110 systemd[1]: Started systemd-resolved.service. Oct 2 20:32:33.070487 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 20:32:33.076171 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 20:32:33.080538 systemd[1]: Reached target nss-lookup.target. Oct 2 20:32:33.086558 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 20:32:33.087972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:32:33.097175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:32:33.115636 kernel: audit: type=1130 audit(1696278753.103:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.115665 kernel: audit: type=1130 audit(1696278753.107:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.115678 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 20:32:33.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.107504 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 20:32:33.108788 systemd[1]: Starting dracut-cmdline.service... Oct 2 20:32:33.119340 kernel: device-mapper: uevent: version 1.0.3 Oct 2 20:32:33.121480 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 20:32:33.125313 systemd-modules-load[185]: Inserted module 'dm_multipath' Oct 2 20:32:33.126669 dracut-cmdline[201]: dracut-dracut-053 Oct 2 20:32:33.126210 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:32:33.128302 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:32:33.138302 kernel: audit: type=1130 audit(1696278753.127:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.138427 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:32:33.141872 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:32:33.146213 kernel: audit: type=1130 audit(1696278753.142:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:33.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.197382 kernel: Loading iSCSI transport class v2.0-870. Oct 2 20:32:33.211381 kernel: iscsi: registered transport (tcp) Oct 2 20:32:33.236682 kernel: iscsi: registered transport (qla4xxx) Oct 2 20:32:33.236774 kernel: QLogic iSCSI HBA Driver Oct 2 20:32:33.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.296235 systemd[1]: Finished dracut-cmdline.service. Oct 2 20:32:33.299294 systemd[1]: Starting dracut-pre-udev.service... Oct 2 20:32:33.379526 kernel: raid6: sse2x4 gen() 6854 MB/s Oct 2 20:32:33.396437 kernel: raid6: sse2x4 xor() 3441 MB/s Oct 2 20:32:33.413422 kernel: raid6: sse2x2 gen() 13555 MB/s Oct 2 20:32:33.430439 kernel: raid6: sse2x2 xor() 8295 MB/s Oct 2 20:32:33.447417 kernel: raid6: sse2x1 gen() 10192 MB/s Oct 2 20:32:33.465373 kernel: raid6: sse2x1 xor() 5929 MB/s Oct 2 20:32:33.465473 kernel: raid6: using algorithm sse2x2 gen() 13555 MB/s Oct 2 20:32:33.465502 kernel: raid6: .... xor() 8295 MB/s, rmw enabled Oct 2 20:32:33.466209 kernel: raid6: using ssse3x2 recovery algorithm Oct 2 20:32:33.484380 kernel: xor: measuring software checksum speed Oct 2 20:32:33.486382 kernel: prefetch64-sse : 16985 MB/sec Oct 2 20:32:33.488454 kernel: generic_sse : 16516 MB/sec Oct 2 20:32:33.488512 kernel: xor: using function: prefetch64-sse (16985 MB/sec) Oct 2 20:32:33.614401 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 20:32:33.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.631000 audit: BPF prog-id=7 op=LOAD Oct 2 20:32:33.632000 audit: BPF prog-id=8 op=LOAD Oct 2 20:32:33.630422 systemd[1]: Finished dracut-pre-udev.service. Oct 2 20:32:33.634050 systemd[1]: Starting systemd-udevd.service... Oct 2 20:32:33.649136 systemd-udevd[384]: Using default interface naming scheme 'v252'. Oct 2 20:32:33.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.655026 systemd[1]: Started systemd-udevd.service. Oct 2 20:32:33.662743 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 20:32:33.683955 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Oct 2 20:32:33.750725 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 20:32:33.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:33.753766 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:32:33.828754 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:32:33.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:33.889352 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Oct 2 20:32:33.929364 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 20:32:33.929440 kernel: GPT:17805311 != 41943039 Oct 2 20:32:33.929454 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 20:32:33.929466 kernel: GPT:17805311 != 41943039 Oct 2 20:32:33.929478 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 20:32:33.929490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:32:33.932346 kernel: libata version 3.00 loaded. Oct 2 20:32:33.936547 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 20:32:33.945355 kernel: scsi host0: ata_piix Oct 2 20:32:33.945607 kernel: scsi host1: ata_piix Oct 2 20:32:33.945723 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Oct 2 20:32:33.945736 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Oct 2 20:32:33.971362 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (436) Oct 2 20:32:33.982740 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:32:34.104727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 20:32:34.122450 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 20:32:34.147931 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 20:32:34.149779 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 20:32:34.155435 systemd[1]: Starting disk-uuid.service... Oct 2 20:32:34.252072 disk-uuid[458]: Primary Header is updated. Oct 2 20:32:34.252072 disk-uuid[458]: Secondary Entries is updated. Oct 2 20:32:34.252072 disk-uuid[458]: Secondary Header is updated. Oct 2 20:32:34.275440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:32:34.287401 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:32:35.311393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:32:35.312625 disk-uuid[459]: The operation has completed successfully. Oct 2 20:32:35.427874 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 20:32:35.428171 systemd[1]: Finished disk-uuid.service. Oct 2 20:32:35.431894 systemd[1]: Starting verity-setup.service... Oct 2 20:32:35.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:35.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:35.479412 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 20:32:35.747929 systemd[1]: Found device dev-mapper-usr.device. Oct 2 20:32:35.752862 systemd[1]: Mounting sysusr-usr.mount... Oct 2 20:32:35.759399 systemd[1]: Finished verity-setup.service. Oct 2 20:32:35.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.229424 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 20:32:36.230532 systemd[1]: Mounted sysusr-usr.mount. Oct 2 20:32:36.231118 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Oct 2 20:32:36.231966 systemd[1]: Starting ignition-setup.service... Oct 2 20:32:36.233519 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 20:32:36.252428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:32:36.252511 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:32:36.252539 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:32:36.286419 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 20:32:36.307368 systemd[1]: Finished ignition-setup.service. Oct 2 20:32:36.308911 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 20:32:36.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.373618 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 20:32:36.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.374000 audit: BPF prog-id=9 op=LOAD Oct 2 20:32:36.376438 systemd[1]: Starting systemd-networkd.service... Oct 2 20:32:36.403170 systemd-networkd[629]: lo: Link UP Oct 2 20:32:36.403185 systemd-networkd[629]: lo: Gained carrier Oct 2 20:32:36.403850 systemd-networkd[629]: Enumeration completed Oct 2 20:32:36.404210 systemd-networkd[629]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:32:36.405712 systemd-networkd[629]: eth0: Link UP Oct 2 20:32:36.405717 systemd-networkd[629]: eth0: Gained carrier Oct 2 20:32:36.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.406505 systemd[1]: Started systemd-networkd.service. Oct 2 20:32:36.408038 systemd[1]: Reached target network.target. Oct 2 20:32:36.411027 systemd[1]: Starting iscsiuio.service... Oct 2 20:32:36.419529 systemd-networkd[629]: eth0: DHCPv4 address 172.24.4.121/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 2 20:32:36.420493 systemd[1]: Started iscsiuio.service. Oct 2 20:32:36.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.423180 systemd[1]: Starting iscsid.service... Oct 2 20:32:36.428347 iscsid[637]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:32:36.428347 iscsid[637]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 20:32:36.428347 iscsid[637]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 20:32:36.428347 iscsid[637]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 20:32:36.428347 iscsid[637]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:32:36.428347 iscsid[637]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 20:32:36.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.436564 systemd[1]: Started iscsid.service. Oct 2 20:32:36.438052 systemd[1]: Starting dracut-initqueue.service... Oct 2 20:32:36.451960 systemd[1]: Finished dracut-initqueue.service. Oct 2 20:32:36.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.452567 systemd[1]: Reached target remote-fs-pre.target. Oct 2 20:32:36.453940 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:32:36.454950 systemd[1]: Reached target remote-fs.target. Oct 2 20:32:36.456840 systemd[1]: Starting dracut-pre-mount.service... Oct 2 20:32:36.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.466882 systemd[1]: Finished dracut-pre-mount.service. Oct 2 20:32:36.656785 ignition[577]: Ignition 2.14.0 Oct 2 20:32:36.658501 ignition[577]: Stage: fetch-offline Oct 2 20:32:36.658694 ignition[577]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:32:36.658745 ignition[577]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:32:36.661497 ignition[577]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:32:36.661775 ignition[577]: parsed url from cmdline: "" Oct 2 20:32:36.661785 ignition[577]: no config URL provided Oct 2 20:32:36.661801 ignition[577]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:32:36.665165 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 20:32:36.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:36.661822 ignition[577]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:32:36.661835 ignition[577]: failed to fetch config: resource requires networking Oct 2 20:32:36.669898 systemd[1]: Starting ignition-fetch.service... 
Oct 2 20:32:36.662638 ignition[577]: Ignition finished successfully Oct 2 20:32:36.695558 ignition[652]: Ignition 2.14.0 Oct 2 20:32:36.695588 ignition[652]: Stage: fetch Oct 2 20:32:36.695848 ignition[652]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:32:36.695893 ignition[652]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:32:36.698099 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:32:36.698370 ignition[652]: parsed url from cmdline: "" Oct 2 20:32:36.698381 ignition[652]: no config URL provided Oct 2 20:32:36.698394 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:32:36.698416 ignition[652]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:32:36.706803 ignition[652]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 2 20:32:36.706828 ignition[652]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 2 20:32:36.707108 ignition[652]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 2 20:32:36.923680 ignition[652]: GET result: OK Oct 2 20:32:36.923940 ignition[652]: parsing config with SHA512: d1479764d4611f38f0a6ed05250a38117ccaef361b9552ec8b86e1df29f3ead84db20f299217dc25fdcd4b0c629177995e019488bfa257ff4b015235e1f70a9c Oct 2 20:32:37.126760 unknown[652]: fetched base config from "system" Oct 2 20:32:37.128111 unknown[652]: fetched base config from "system" Oct 2 20:32:37.128160 unknown[652]: fetched user config from "openstack" Oct 2 20:32:37.130035 ignition[652]: fetch: fetch complete Oct 2 20:32:37.130530 ignition[652]: fetch: fetch passed Oct 2 20:32:37.131041 ignition[652]: Ignition finished successfully Oct 2 20:32:37.134626 systemd[1]: Finished ignition-fetch.service. Oct 2 20:32:37.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.136404 kernel: kauditd_printk_skb: 19 callbacks suppressed Oct 2 20:32:37.136432 kernel: audit: type=1130 audit(1696278757.135:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.138927 systemd[1]: Starting ignition-kargs.service... Oct 2 20:32:37.166921 ignition[658]: Ignition 2.14.0 Oct 2 20:32:37.166935 ignition[658]: Stage: kargs Oct 2 20:32:37.167045 ignition[658]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:32:37.167067 ignition[658]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:32:37.170361 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:32:37.171282 ignition[658]: kargs: kargs passed Oct 2 20:32:37.184681 kernel: audit: type=1130 audit(1696278757.173:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:37.173313 systemd[1]: Finished ignition-kargs.service. Oct 2 20:32:37.171363 ignition[658]: Ignition finished successfully Oct 2 20:32:37.175117 systemd[1]: Starting ignition-disks.service... Oct 2 20:32:37.185475 ignition[664]: Ignition 2.14.0 Oct 2 20:32:37.185483 ignition[664]: Stage: disks Oct 2 20:32:37.200619 kernel: audit: type=1130 audit(1696278757.190:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.189724 systemd[1]: Finished ignition-disks.service. Oct 2 20:32:37.186351 ignition[664]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:32:37.191385 systemd[1]: Reached target initrd-root-device.target. Oct 2 20:32:37.186378 ignition[664]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:32:37.201071 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:32:37.187549 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:32:37.201994 systemd[1]: Reached target local-fs.target. Oct 2 20:32:37.188816 ignition[664]: disks: disks passed Oct 2 20:32:37.203149 systemd[1]: Reached target sysinit.target. Oct 2 20:32:37.188884 ignition[664]: Ignition finished successfully Oct 2 20:32:37.204189 systemd[1]: Reached target basic.target. Oct 2 20:32:37.205958 systemd[1]: Starting systemd-fsck-root.service... Oct 2 20:32:37.226127 systemd-fsck[671]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks Oct 2 20:32:37.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.236708 systemd[1]: Finished systemd-fsck-root.service. Oct 2 20:32:37.250257 kernel: audit: type=1130 audit(1696278757.236:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.238086 systemd[1]: Mounting sysroot.mount... Oct 2 20:32:37.262508 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 20:32:37.263264 systemd[1]: Mounted sysroot.mount. Oct 2 20:32:37.264296 systemd[1]: Reached target initrd-root-fs.target. Oct 2 20:32:37.268755 systemd[1]: Mounting sysroot-usr.mount... Oct 2 20:32:37.270251 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 20:32:37.271533 systemd[1]: Starting flatcar-openstack-hostname.service... Oct 2 20:32:37.277012 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 20:32:37.277401 systemd[1]: Reached target ignition-diskful.target. Oct 2 20:32:37.285789 systemd[1]: Mounted sysroot-usr.mount. Oct 2 20:32:37.293543 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:32:37.297691 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 20:32:37.313367 initrd-setup-root[683]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 20:32:37.328050 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (678) Oct 2 20:32:37.329550 initrd-setup-root[691]: cut: /sysroot/etc/group: No such file or directory Oct 2 20:32:37.334184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:32:37.334205 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:32:37.334217 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:32:37.339702 initrd-setup-root[715]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 20:32:37.346162 initrd-setup-root[725]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 20:32:37.348222 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 20:32:37.424241 systemd[1]: Finished initrd-setup-root.service. Oct 2 20:32:37.435821 kernel: audit: type=1130 audit(1696278757.424:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.425722 systemd[1]: Starting ignition-mount.service... Oct 2 20:32:37.437549 systemd[1]: Starting sysroot-boot.service... Oct 2 20:32:37.442539 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 20:32:37.442650 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 20:32:37.465689 ignition[745]: INFO : Ignition 2.14.0 Oct 2 20:32:37.466516 ignition[745]: INFO : Stage: mount Oct 2 20:32:37.472500 ignition[745]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:32:37.473397 ignition[745]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:32:37.478420 ignition[745]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:32:37.483541 ignition[745]: INFO : mount: mount passed Oct 2 20:32:37.484239 ignition[745]: INFO : Ignition finished successfully Oct 2 20:32:37.485991 systemd[1]: Finished ignition-mount.service. Oct 2 20:32:37.490675 kernel: audit: type=1130 audit(1696278757.486:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.497239 systemd[1]: Finished sysroot-boot.service. Oct 2 20:32:37.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.503359 kernel: audit: type=1130 audit(1696278757.498:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:37.523232 coreos-metadata[677]: Oct 02 20:32:37.523 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 2 20:32:37.537640 coreos-metadata[677]: Oct 02 20:32:37.537 INFO Fetch successful Oct 2 20:32:37.538858 coreos-metadata[677]: Oct 02 20:32:37.538 INFO wrote hostname ci-3510-3-0-6-a35538653c.novalocal to /sysroot/etc/hostname Oct 2 20:32:37.543967 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 2 20:32:37.544143 systemd[1]: Finished flatcar-openstack-hostname.service. Oct 2 20:32:37.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.547957 systemd[1]: Starting ignition-files.service... Oct 2 20:32:37.562489 kernel: audit: type=1130 audit(1696278757.546:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.562530 kernel: audit: type=1131 audit(1696278757.546:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:37.567116 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:32:37.580389 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (754) Oct 2 20:32:37.586371 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:32:37.586442 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:32:37.586465 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:32:37.597147 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 20:32:37.615451 ignition[773]: INFO : Ignition 2.14.0 Oct 2 20:32:37.615451 ignition[773]: INFO : Stage: files Oct 2 20:32:37.617556 ignition[773]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:32:37.617556 ignition[773]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:32:37.617556 ignition[773]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:32:37.622201 ignition[773]: DEBUG : files: compiled without relabeling support, skipping Oct 2 20:32:37.622201 ignition[773]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 20:32:37.622201 ignition[773]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 20:32:37.628152 ignition[773]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 20:32:37.629600 ignition[773]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 20:32:37.630892 ignition[773]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 20:32:37.630892 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 20:32:37.629872 unknown[773]: wrote ssh authorized keys file for user: core Oct 2 20:32:37.635239 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 20:32:37.819510 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 20:32:37.873641 systemd-networkd[629]: eth0: Gained IPv6LL Oct 2 20:32:38.177052 ignition[773]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 20:32:38.178633 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 20:32:38.178633 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 20:32:38.178633 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 20:32:38.278368 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 20:32:38.480880 ignition[773]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 20:32:38.482491 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 20:32:38.490684 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:32:38.491612 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Oct 2 
20:32:38.675498 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 20:32:40.395711 ignition[773]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Oct 2 20:32:40.399553 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:32:40.399553 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:32:40.399553 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Oct 2 20:32:40.549788 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 20:32:43.417760 ignition[773]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Oct 2 20:32:43.421675 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:32:43.421675 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 20:32:43.421675 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 20:32:43.421675 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:32:43.421675 ignition[773]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(9): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(9): op(a): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(9): op(a): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(9): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(b): [started] processing unit "coreos-metadata.service" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(b): op(c): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(b): [finished] processing unit "coreos-metadata.service" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service" Oct 2 20:32:43.421675 ignition[773]: INFO : files: op(d): op(e): [started] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:32:43.486603 kernel: audit: type=1130 audit(1696278763.444:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.486639 kernel: audit: type=1130 audit(1696278763.468:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.486651 kernel: audit: type=1131 audit(1696278763.468:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.486663 kernel: audit: type=1130 audit(1696278763.481:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.438907 systemd[1]: Finished ignition-files.service. 
Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(f): [started] processing unit "prepare-critools.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(12): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 20:32:43.488100 ignition[773]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:32:43.488100 ignition[773]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:32:43.488100 ignition[773]: INFO : files: files passed Oct 2 20:32:43.488100 ignition[773]: INFO : Ignition finished successfully Oct 2 20:32:43.519772 kernel: audit: type=1130 audit(1696278763.505:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.519798 kernel: audit: type=1131 audit(1696278763.505:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.447718 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 20:32:43.460644 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 20:32:43.521439 initrd-setup-root-after-ignition[798]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 20:32:43.462851 systemd[1]: Starting ignition-quench.service... Oct 2 20:32:43.468666 systemd[1]: ignition-quench.service: Deactivated successfully. 
Oct 2 20:32:43.468753 systemd[1]: Finished ignition-quench.service. Oct 2 20:32:43.472604 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 20:32:43.482200 systemd[1]: Reached target ignition-complete.target. Oct 2 20:32:43.489212 systemd[1]: Starting initrd-parse-etc.service... Oct 2 20:32:43.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.504220 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 20:32:43.531878 kernel: audit: type=1130 audit(1696278763.526:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.504356 systemd[1]: Finished initrd-parse-etc.service. Oct 2 20:32:43.506399 systemd[1]: Reached target initrd-fs.target. Oct 2 20:32:43.514166 systemd[1]: Reached target initrd.target. Oct 2 20:32:43.515316 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 20:32:43.516110 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 20:32:43.526686 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 20:32:43.531452 systemd[1]: Starting initrd-cleanup.service... Oct 2 20:32:43.541622 systemd[1]: Stopped target nss-lookup.target. Oct 2 20:32:43.542654 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 20:32:43.547471 systemd[1]: Stopped target timers.target. Oct 2 20:32:43.548052 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 20:32:43.552759 kernel: audit: type=1131 audit(1696278763.548:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.548186 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 20:32:43.549069 systemd[1]: Stopped target initrd.target. Oct 2 20:32:43.553241 systemd[1]: Stopped target basic.target. Oct 2 20:32:43.554091 systemd[1]: Stopped target ignition-complete.target. Oct 2 20:32:43.555043 systemd[1]: Stopped target ignition-diskful.target. Oct 2 20:32:43.555950 systemd[1]: Stopped target initrd-root-device.target. Oct 2 20:32:43.556820 systemd[1]: Stopped target remote-fs.target. Oct 2 20:32:43.557706 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 20:32:43.558576 systemd[1]: Stopped target sysinit.target. Oct 2 20:32:43.559389 systemd[1]: Stopped target local-fs.target. Oct 2 20:32:43.560198 systemd[1]: Stopped target local-fs-pre.target. Oct 2 20:32:43.561016 systemd[1]: Stopped target swap.target. Oct 2 20:32:43.566304 kernel: audit: type=1131 audit(1696278763.562:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.561762 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 20:32:43.561905 systemd[1]: Stopped dracut-pre-mount.service. 
Oct 2 20:32:43.571495 kernel: audit: type=1131 audit(1696278763.567:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.562755 systemd[1]: Stopped target cryptsetup.target. Oct 2 20:32:43.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.566797 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 20:32:43.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.566943 systemd[1]: Stopped dracut-initqueue.service. Oct 2 20:32:43.567909 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 20:32:43.568069 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 20:32:43.585738 iscsid[637]: iscsid shutting down. Oct 2 20:32:43.572092 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 20:32:43.572233 systemd[1]: Stopped ignition-files.service. Oct 2 20:32:43.573781 systemd[1]: Stopping ignition-mount.service... Oct 2 20:32:43.580849 systemd[1]: Stopping iscsid.service... Oct 2 20:32:43.586783 systemd[1]: Stopping sysroot-boot.service... Oct 2 20:32:43.591552 ignition[811]: INFO : Ignition 2.14.0 Oct 2 20:32:43.594061 ignition[811]: INFO : Stage: umount Oct 2 20:32:43.594061 ignition[811]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:32:43.594061 ignition[811]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:32:43.594061 ignition[811]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:32:43.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.593638 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 20:32:43.599246 ignition[811]: INFO : umount: umount passed Oct 2 20:32:43.599246 ignition[811]: INFO : Ignition finished successfully Oct 2 20:32:43.593847 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 20:32:43.594817 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 20:32:43.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.594957 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 20:32:43.599129 systemd[1]: iscsid.service: Deactivated successfully. 
Oct 2 20:32:43.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.600373 systemd[1]: Stopped iscsid.service. Oct 2 20:32:43.601890 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 20:32:43.601982 systemd[1]: Stopped ignition-mount.service. Oct 2 20:32:43.604031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 20:32:43.604113 systemd[1]: Finished initrd-cleanup.service. Oct 2 20:32:43.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.606755 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 20:32:43.606804 systemd[1]: Stopped ignition-disks.service. Oct 2 20:32:43.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.607799 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 20:32:43.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.607835 systemd[1]: Stopped ignition-kargs.service. Oct 2 20:32:43.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.608731 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 20:32:43.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.608768 systemd[1]: Stopped ignition-fetch.service. Oct 2 20:32:43.615930 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 20:32:43.615970 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 20:32:43.616877 systemd[1]: Stopped target paths.target. Oct 2 20:32:43.617861 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 20:32:43.622380 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 20:32:43.622854 systemd[1]: Stopped target slices.target. Oct 2 20:32:43.623739 systemd[1]: Stopped target sockets.target. Oct 2 20:32:43.624639 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 20:32:43.624675 systemd[1]: Closed iscsid.socket. Oct 2 20:32:43.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.625484 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 20:32:43.625523 systemd[1]: Stopped ignition-setup.service. Oct 2 20:32:43.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:43.626428 systemd[1]: Stopping iscsiuio.service... Oct 2 20:32:43.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.629067 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 20:32:43.629542 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 20:32:43.629640 systemd[1]: Stopped iscsiuio.service. Oct 2 20:32:43.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.630797 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 20:32:43.630885 systemd[1]: Stopped sysroot-boot.service. Oct 2 20:32:43.631714 systemd[1]: Stopped target network.target. Oct 2 20:32:43.632488 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 20:32:43.632519 systemd[1]: Closed iscsiuio.socket. Oct 2 20:32:43.633368 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 20:32:43.633411 systemd[1]: Stopped initrd-setup-root.service. Oct 2 20:32:43.634404 systemd[1]: Stopping systemd-networkd.service... Oct 2 20:32:43.635569 systemd[1]: Stopping systemd-resolved.service... Oct 2 20:32:43.638585 systemd-networkd[629]: eth0: DHCPv6 lease lost Oct 2 20:32:43.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.640777 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 20:32:43.640880 systemd[1]: Stopped systemd-networkd.service. Oct 2 20:32:43.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.642882 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 20:32:43.644000 audit: BPF prog-id=9 op=UNLOAD Oct 2 20:32:43.644000 audit: BPF prog-id=6 op=UNLOAD Oct 2 20:32:43.643000 systemd[1]: Stopped systemd-resolved.service. Oct 2 20:32:43.644897 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 20:32:43.644933 systemd[1]: Closed systemd-networkd.socket. Oct 2 20:32:43.646483 systemd[1]: Stopping network-cleanup.service... Oct 2 20:32:43.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.647142 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 20:32:43.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.647198 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 20:32:43.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.649439 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 20:32:43.649477 systemd[1]: Stopped systemd-sysctl.service. Oct 2 20:32:43.650770 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Oct 2 20:32:43.650812 systemd[1]: Stopped systemd-modules-load.service. Oct 2 20:32:43.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.651650 systemd[1]: Stopping systemd-udevd.service... Oct 2 20:32:43.653925 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 20:32:43.654597 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 20:32:43.654728 systemd[1]: Stopped systemd-udevd.service. Oct 2 20:32:43.657339 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 20:32:43.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.657433 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 20:32:43.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.659640 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 20:32:43.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.659675 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 20:32:43.660528 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 20:32:43.660574 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 20:32:43.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.661496 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 20:32:43.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.661536 systemd[1]: Stopped dracut-cmdline.service. Oct 2 20:32:43.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.662543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 20:32:43.662579 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 20:32:43.664056 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 20:32:43.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.674208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 20:32:43.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:43.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:43.674254 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 20:32:43.675426 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 20:32:43.675495 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 20:32:43.676210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 20:32:43.676246 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 20:32:43.678011 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 20:32:43.678539 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 20:32:43.678640 systemd[1]: Stopped network-cleanup.service. Oct 2 20:32:43.679430 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 20:32:43.679519 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 20:32:43.680297 systemd[1]: Reached target initrd-switch-root.target. Oct 2 20:32:43.681976 systemd[1]: Starting initrd-switch-root.service... Oct 2 20:32:43.701298 systemd[1]: Switching root. Oct 2 20:32:43.718315 systemd-journald[184]: Journal stopped Oct 2 20:32:47.937814 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Oct 2 20:32:47.937876 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 20:32:47.937891 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 20:32:47.937902 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 20:32:47.937913 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 20:32:47.937924 kernel: SELinux: policy capability open_perms=1 Oct 2 20:32:47.937942 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 20:32:47.937956 kernel: SELinux: policy capability always_check_network=0 Oct 2 20:32:47.937967 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 20:32:47.937978 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 20:32:47.937988 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 20:32:47.938005 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 20:32:47.938016 systemd[1]: Successfully loaded SELinux policy in 98.244ms. Oct 2 20:32:47.938040 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.338ms. Oct 2 20:32:47.938054 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:32:47.938068 systemd[1]: Detected virtualization kvm. Oct 2 20:32:47.938080 systemd[1]: Detected architecture x86-64. Oct 2 20:32:47.938091 systemd[1]: Detected first boot. Oct 2 20:32:47.938103 systemd[1]: Hostname set to . Oct 2 20:32:47.938114 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:32:47.938126 systemd[1]: Populated /etc with preset unit settings. Oct 2 20:32:47.938158 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:32:47.938175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 2 20:32:47.938189 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:32:47.938343 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 20:32:47.938366 systemd[1]: Stopped initrd-switch-root.service. Oct 2 20:32:47.938379 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 20:32:47.938391 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 20:32:47.938403 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 20:32:47.938415 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 20:32:47.938430 systemd[1]: Created slice system-getty.slice. Oct 2 20:32:47.938441 systemd[1]: Created slice system-modprobe.slice. Oct 2 20:32:47.938452 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 20:32:47.938464 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 20:32:47.938475 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 20:32:47.938487 systemd[1]: Created slice user.slice. Oct 2 20:32:47.938499 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:32:47.938510 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 20:32:47.938521 systemd[1]: Set up automount boot.automount. Oct 2 20:32:47.938535 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 20:32:47.938547 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 20:32:47.938560 systemd[1]: Stopped target initrd-fs.target. Oct 2 20:32:47.938592 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 20:32:47.938605 systemd[1]: Reached target integritysetup.target. Oct 2 20:32:47.938616 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:32:47.938630 systemd[1]: Reached target remote-fs.target. Oct 2 20:32:47.938642 systemd[1]: Reached target slices.target. Oct 2 20:32:47.938654 systemd[1]: Reached target swap.target. Oct 2 20:32:47.938666 systemd[1]: Reached target torcx.target. Oct 2 20:32:47.938677 systemd[1]: Reached target veritysetup.target. Oct 2 20:32:47.938689 systemd[1]: Listening on systemd-coredump.socket. Oct 2 20:32:47.938700 systemd[1]: Listening on systemd-initctl.socket. Oct 2 20:32:47.938713 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:32:47.938725 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:32:47.938736 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:32:47.938749 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 20:32:47.938762 systemd[1]: Mounting dev-hugepages.mount... Oct 2 20:32:47.938773 systemd[1]: Mounting dev-mqueue.mount... Oct 2 20:32:47.938785 systemd[1]: Mounting media.mount... Oct 2 20:32:47.938796 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:32:47.938809 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 20:32:47.938820 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 20:32:47.938832 systemd[1]: Mounting tmp.mount... Oct 2 20:32:47.938861 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 20:32:47.938876 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 20:32:47.938888 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:32:47.938900 systemd[1]: Starting modprobe@configfs.service... Oct 2 20:32:47.938912 systemd[1]: Starting modprobe@dm_mod.service... 
Oct 2 20:32:47.938923 systemd[1]: Starting modprobe@drm.service... Oct 2 20:32:47.938935 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 20:32:47.938946 systemd[1]: Starting modprobe@fuse.service... Oct 2 20:32:47.938957 systemd[1]: Starting modprobe@loop.service... Oct 2 20:32:47.938968 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 20:32:47.938982 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 20:32:47.938994 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 20:32:47.939005 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 20:32:47.939016 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 20:32:47.939027 systemd[1]: Stopped systemd-journald.service. Oct 2 20:32:47.939039 systemd[1]: Starting systemd-journald.service... Oct 2 20:32:47.939050 kernel: fuse: init (API version 7.34) Oct 2 20:32:47.939060 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:32:47.939072 systemd[1]: Starting systemd-network-generator.service... Oct 2 20:32:47.939085 systemd[1]: Starting systemd-remount-fs.service... Oct 2 20:32:47.939096 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:32:47.939125 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 20:32:47.939137 systemd[1]: Stopped verity-setup.service. Oct 2 20:32:47.939149 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:32:47.939161 systemd[1]: Mounted dev-hugepages.mount. Oct 2 20:32:47.939172 systemd[1]: Mounted dev-mqueue.mount. Oct 2 20:32:47.939184 systemd[1]: Mounted media.mount. Oct 2 20:32:47.939195 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 20:32:47.939214 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 20:32:47.939226 systemd[1]: Mounted tmp.mount. Oct 2 20:32:47.939238 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:32:47.939249 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 20:32:47.939260 systemd[1]: Finished modprobe@configfs.service. Oct 2 20:32:47.939271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 20:32:47.939283 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 20:32:47.939294 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 20:32:47.939306 systemd[1]: Finished modprobe@drm.service. Oct 2 20:32:47.940206 kernel: loop: module loaded Oct 2 20:32:47.940227 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 20:32:47.940241 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 20:32:47.940254 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 20:32:47.940269 systemd-journald[923]: Journal started Oct 2 20:32:47.941463 systemd-journald[923]: Runtime Journal (/run/log/journal/458131273c994b8ca43fdbbefc125c93) is 4.9M, max 39.5M, 34.5M free. Oct 2 20:32:47.942410 systemd[1]: Finished modprobe@fuse.service. Oct 2 20:32:47.942437 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 2 20:32:44.033000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 20:32:44.170000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:32:44.170000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:32:44.171000 audit: BPF prog-id=10 op=LOAD Oct 2 20:32:44.171000 audit: BPF prog-id=10 op=UNLOAD Oct 2 20:32:44.171000 audit: BPF prog-id=11 op=LOAD Oct 2 20:32:44.171000 audit: BPF prog-id=11 op=UNLOAD Oct 2 20:32:47.713000 audit: BPF prog-id=12 op=LOAD Oct 2 20:32:47.713000 audit: BPF prog-id=3 op=UNLOAD Oct 2 20:32:47.713000 audit: BPF prog-id=13 op=LOAD Oct 2 20:32:47.713000 audit: BPF prog-id=14 op=LOAD Oct 2 20:32:47.713000 audit: BPF prog-id=4 op=UNLOAD Oct 2 20:32:47.713000 audit: BPF prog-id=5 op=UNLOAD Oct 2 20:32:47.715000 audit: BPF prog-id=15 op=LOAD Oct 2 20:32:47.715000 audit: BPF prog-id=12 op=UNLOAD Oct 2 20:32:47.715000 audit: BPF prog-id=16 op=LOAD Oct 2 20:32:47.715000 audit: BPF prog-id=17 op=LOAD Oct 2 20:32:47.715000 audit: BPF prog-id=13 op=UNLOAD Oct 2 20:32:47.715000 audit: BPF prog-id=14 op=UNLOAD Oct 2 20:32:47.716000 audit: BPF prog-id=18 op=LOAD Oct 2 20:32:47.716000 audit: BPF prog-id=15 op=UNLOAD Oct 2 20:32:47.716000 audit: BPF prog-id=19 op=LOAD Oct 2 20:32:47.716000 audit: BPF prog-id=20 op=LOAD Oct 2 20:32:47.716000 audit: BPF prog-id=16 op=UNLOAD Oct 2 20:32:47.716000 audit: BPF prog-id=17 op=UNLOAD Oct 2 20:32:47.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.724000 audit: BPF prog-id=18 op=UNLOAD Oct 2 20:32:47.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:47.867000 audit: BPF prog-id=21 op=LOAD Oct 2 20:32:47.870000 audit: BPF prog-id=22 op=LOAD Oct 2 20:32:47.870000 audit: BPF prog-id=23 op=LOAD Oct 2 20:32:47.871000 audit: BPF prog-id=19 op=UNLOAD Oct 2 20:32:47.871000 audit: BPF prog-id=20 op=UNLOAD Oct 2 20:32:47.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.936000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 20:32:47.936000 audit[923]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd50245410 a2=4000 a3=7ffd502454ac items=0 ppid=1 pid=923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:32:47.936000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 20:32:47.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:47.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:44.361587 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:32:47.712161 systemd[1]: Queued start job for default target multi-user.target. Oct 2 20:32:47.945535 systemd[1]: Finished modprobe@loop.service. Oct 2 20:32:47.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:44.362380 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:32:47.712175 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 20:32:44.362402 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:32:47.717168 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 20:32:44.362442 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 20:32:44.362454 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 20:32:44.362487 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 20:32:44.362502 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 20:32:47.947366 systemd[1]: Started systemd-journald.service. Oct 2 20:32:47.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:44.362727 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 20:32:47.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.947496 systemd[1]: Finished systemd-modules-load.service. 
Oct 2 20:32:44.362771 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:32:44.362789 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:32:44.363770 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 20:32:44.363811 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 20:32:44.363833 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 20:32:44.363852 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 20:32:44.363871 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 20:32:44.363888 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 20:32:47.295481 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:32:47.295838 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:32:47.295995 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:32:47.296200 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:32:47.296262 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 20:32:47.296387 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:32:47Z" level=debug 
msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 20:32:47.952524 systemd[1]: Finished systemd-network-generator.service. Oct 2 20:32:47.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.953865 systemd[1]: Finished systemd-remount-fs.service. Oct 2 20:32:47.954780 systemd[1]: Reached target network-pre.target. Oct 2 20:32:47.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:47.956776 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 20:32:47.958314 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 20:32:47.961465 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 20:32:47.964044 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 20:32:47.966481 systemd[1]: Starting systemd-journal-flush.service... Oct 2 20:32:47.966988 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 20:32:47.968438 systemd[1]: Starting systemd-random-seed.service... Oct 2 20:32:47.968936 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 20:32:47.970499 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:32:47.973664 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 20:32:47.974356 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 20:32:47.980576 systemd-journald[923]: Time spent on flushing to /var/log/journal/458131273c994b8ca43fdbbefc125c93 is 48.530ms for 1117 entries. Oct 2 20:32:47.980576 systemd-journald[923]: System Journal (/var/log/journal/458131273c994b8ca43fdbbefc125c93) is 8.0M, max 584.8M, 576.8M free. Oct 2 20:32:48.045943 systemd-journald[923]: Received client request to flush runtime journal. Oct 2 20:32:47.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:47.995528 systemd[1]: Finished systemd-random-seed.service. Oct 2 20:32:47.996195 systemd[1]: Reached target first-boot-complete.target. Oct 2 20:32:48.004009 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 20:32:48.006070 systemd[1]: Starting systemd-sysusers.service... Oct 2 20:32:48.011946 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:32:48.032194 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:32:48.034440 systemd[1]: Starting systemd-udev-settle.service... Oct 2 20:32:48.039299 systemd[1]: Finished systemd-sysusers.service. Oct 2 20:32:48.041304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:32:48.050190 systemd[1]: Finished systemd-journal-flush.service. Oct 2 20:32:48.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.052201 udevadm[954]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 20:32:48.088636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:32:48.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.733729 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 20:32:48.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.737399 kernel: kauditd_printk_skb: 99 callbacks suppressed Oct 2 20:32:48.737485 kernel: audit: type=1130 audit(1696278768.734:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.746000 audit: BPF prog-id=24 op=LOAD Oct 2 20:32:48.750021 kernel: audit: type=1334 audit(1696278768.746:147): prog-id=24 op=LOAD Oct 2 20:32:48.749000 audit: BPF prog-id=25 op=LOAD Oct 2 20:32:48.750000 audit: BPF prog-id=7 op=UNLOAD Oct 2 20:32:48.756418 kernel: audit: type=1334 audit(1696278768.749:148): prog-id=25 op=LOAD Oct 2 20:32:48.756508 kernel: audit: type=1334 audit(1696278768.750:149): prog-id=7 op=UNLOAD Oct 2 20:32:48.756643 kernel: audit: type=1334 audit(1696278768.750:150): prog-id=8 op=UNLOAD Oct 2 20:32:48.750000 audit: BPF prog-id=8 op=UNLOAD Oct 2 20:32:48.753916 systemd[1]: Starting systemd-udevd.service... Oct 2 20:32:48.801678 systemd-udevd[958]: Using default interface naming scheme 'v252'. Oct 2 20:32:48.868002 systemd[1]: Started systemd-udevd.service. Oct 2 20:32:48.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.885404 kernel: audit: type=1130 audit(1696278768.868:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:48.886949 systemd[1]: Starting systemd-networkd.service... 
Oct 2 20:32:48.874000 audit: BPF prog-id=26 op=LOAD Oct 2 20:32:48.896414 kernel: audit: type=1334 audit(1696278768.874:152): prog-id=26 op=LOAD Oct 2 20:32:48.903000 audit: BPF prog-id=27 op=LOAD Oct 2 20:32:48.907431 kernel: audit: type=1334 audit(1696278768.903:153): prog-id=27 op=LOAD Oct 2 20:32:48.907000 audit: BPF prog-id=28 op=LOAD Oct 2 20:32:48.912360 kernel: audit: type=1334 audit(1696278768.907:154): prog-id=28 op=LOAD Oct 2 20:32:48.913135 systemd[1]: Starting systemd-userdbd.service... Oct 2 20:32:48.911000 audit: BPF prog-id=29 op=LOAD Oct 2 20:32:48.917070 kernel: audit: type=1334 audit(1696278768.911:155): prog-id=29 op=LOAD Oct 2 20:32:48.976444 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 20:32:48.978804 systemd[1]: Started systemd-userdbd.service. Oct 2 20:32:48.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:49.005236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:32:49.039516 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 20:32:49.050061 kernel: ACPI: button: Power Button [PWRF] Oct 2 20:32:49.081120 systemd-networkd[971]: lo: Link UP Oct 2 20:32:49.081131 systemd-networkd[971]: lo: Gained carrier Oct 2 20:32:49.082732 systemd-networkd[971]: Enumeration completed Oct 2 20:32:49.082852 systemd[1]: Started systemd-networkd.service. Oct 2 20:32:49.082858 systemd-networkd[971]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:32:49.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:49.084851 systemd-networkd[971]: eth0: Link UP Oct 2 20:32:49.084858 systemd-networkd[971]: eth0: Gained carrier Oct 2 20:32:49.067000 audit[976]: AVC avc: denied { confidentiality } for pid=976 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 20:32:49.093452 systemd-networkd[971]: eth0: DHCPv4 address 172.24.4.121/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 2 20:32:49.067000 audit[976]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c708a72060 a1=32194 a2=7fe2fdfd3bc5 a3=5 items=106 ppid=958 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:32:49.067000 audit: CWD cwd="/" Oct 2 20:32:49.067000 audit: PATH item=0 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=1 name=(null) inode=13309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=2 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=3 name=(null) inode=13310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=4 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=5 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=6 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=7 name=(null) inode=13312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=8 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=9 name=(null) inode=14337 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=10 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=11 name=(null) inode=14338 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=12 name=(null) inode=13311 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=13 name=(null) inode=14339 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=14 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=15 name=(null) inode=14340 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=16 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=17 name=(null) inode=14341 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=18 name=(null) inode=14341 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=19 name=(null) inode=14342 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=20 name=(null) inode=14341 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=21 name=(null) inode=14343 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=22 name=(null) inode=14341 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=23 name=(null) inode=14344 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=24 name=(null) inode=14341 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=25 name=(null) inode=14345 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=26 name=(null) inode=14341 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=27 name=(null) inode=14346 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=28 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=29 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=30 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=31 name=(null) inode=14348 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=32 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=33 name=(null) inode=14349 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=34 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=35 name=(null) inode=14350 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=36 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=37 name=(null) inode=14351 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=38 name=(null) inode=14347 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=39 name=(null) inode=14352 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=40 name=(null) inode=13308 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=41 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=42 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=43 name=(null) inode=14354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=44 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 
audit: PATH item=45 name=(null) inode=14355 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=46 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=47 name=(null) inode=14356 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=48 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=49 name=(null) inode=14357 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=50 name=(null) inode=14353 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=51 name=(null) inode=14358 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=52 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=53 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=54 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=55 name=(null) inode=14360 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=56 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=57 name=(null) inode=14361 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=58 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=59 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=60 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=61 name=(null) inode=14363 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=62 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=63 name=(null) inode=14364 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=64 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=65 name=(null) inode=14365 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=66 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=67 name=(null) inode=14366 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=68 name=(null) inode=14362 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=69 name=(null) inode=14367 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=70 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=71 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=72 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=73 name=(null) inode=14369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=74 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=75 name=(null) inode=14370 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=76 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=77 name=(null) inode=14371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=78 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=79 name=(null) inode=14372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=80 name=(null) inode=14368 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=81 name=(null) inode=14373 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=82 name=(null) inode=14359 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=83 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=84 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=85 name=(null) inode=14375 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=86 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=87 name=(null) inode=14376 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=88 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=89 name=(null) inode=14377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=90 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=91 name=(null) inode=14378 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=92 name=(null) inode=14374 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=93 name=(null) inode=14379 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=94 name=(null) inode=14359 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=95 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=96 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=97 name=(null) inode=14381 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=98 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=99 name=(null) inode=14382 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=100 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=101 name=(null) inode=14383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.125364 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 20:32:49.067000 audit: PATH item=102 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=103 name=(null) inode=14384 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=104 name=(null) inode=14380 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PATH item=105 name=(null) inode=14385 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:32:49.067000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 20:32:49.135382 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 2 20:32:49.139353 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 20:32:49.165844 systemd[1]: Finished systemd-udev-settle.service. Oct 2 20:32:49.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:49.167646 systemd[1]: Starting lvm2-activation-early.service... Oct 2 20:32:49.200006 lvm[987]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:32:49.227334 systemd[1]: Finished lvm2-activation-early.service. 
Oct 2 20:32:49.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:49.227965 systemd[1]: Reached target cryptsetup.target. Oct 2 20:32:49.229622 systemd[1]: Starting lvm2-activation.service... Oct 2 20:32:49.239913 lvm[988]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:32:49.280189 systemd[1]: Finished lvm2-activation.service. Oct 2 20:32:49.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:49.281524 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:32:49.282683 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 20:32:49.282744 systemd[1]: Reached target local-fs.target. Oct 2 20:32:49.283857 systemd[1]: Reached target machines.target. Oct 2 20:32:49.287564 systemd[1]: Starting ldconfig.service... Oct 2 20:32:49.290035 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 20:32:49.290150 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:32:49.292497 systemd[1]: Starting systemd-boot-update.service... Oct 2 20:32:49.296828 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 20:32:49.309382 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 20:32:49.312471 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:32:49.312689 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:32:49.318638 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 20:32:49.320564 systemd[1]: boot.automount: Got automount request for /boot, triggered by 990 (bootctl) Oct 2 20:32:49.325596 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 20:32:49.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:49.353963 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 20:32:49.370901 systemd-tmpfiles[993]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 20:32:49.377242 systemd-tmpfiles[993]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 20:32:49.381394 systemd-tmpfiles[993]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 20:32:49.986974 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 20:32:49.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:49.989320 systemd[1]: Finished systemd-machine-id-commit.service. 
Oct 2 20:32:50.068218 systemd-fsck[999]: fsck.fat 4.2 (2021-01-31) Oct 2 20:32:50.068218 systemd-fsck[999]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 20:32:50.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:50.072094 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 20:32:50.077796 systemd[1]: Mounting boot.mount... Oct 2 20:32:50.102046 systemd[1]: Mounted boot.mount. Oct 2 20:32:50.140924 systemd[1]: Finished systemd-boot-update.service. Oct 2 20:32:50.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:50.225218 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 20:32:50.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:50.229586 systemd[1]: Starting audit-rules.service... Oct 2 20:32:50.233290 systemd[1]: Starting clean-ca-certificates.service... Oct 2 20:32:50.238642 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 20:32:50.243000 audit: BPF prog-id=30 op=LOAD Oct 2 20:32:50.248507 systemd[1]: Starting systemd-resolved.service... Oct 2 20:32:50.249000 audit: BPF prog-id=31 op=LOAD Oct 2 20:32:50.251295 systemd[1]: Starting systemd-timesyncd.service... Oct 2 20:32:50.254512 systemd[1]: Starting systemd-update-utmp.service... Oct 2 20:32:50.262699 systemd[1]: Finished clean-ca-certificates.service. Oct 2 20:32:50.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:50.263411 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 20:32:50.272000 audit[1013]: SYSTEM_BOOT pid=1013 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 20:32:50.278518 systemd[1]: Finished systemd-update-utmp.service. Oct 2 20:32:50.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:32:50.303751 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 20:32:50.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:32:50.340000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 20:32:50.340000 audit[1022]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3191b040 a2=420 a3=0 items=0 ppid=1002 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:32:50.340000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 20:32:50.341922 augenrules[1022]: No rules Oct 2 20:32:50.342169 systemd[1]: Finished audit-rules.service. Oct 2 20:32:50.355796 systemd-resolved[1011]: Positive Trust Anchors: Oct 2 20:32:50.355814 systemd-resolved[1011]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:32:50.355857 systemd-resolved[1011]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:32:50.363033 systemd[1]: Started systemd-timesyncd.service. Oct 2 20:32:50.363735 systemd[1]: Reached target time-set.target. Oct 2 20:32:50.376795 systemd-resolved[1011]: Using system hostname 'ci-3510-3-0-6-a35538653c.novalocal'. Oct 2 20:32:50.379024 systemd[1]: Started systemd-resolved.service. Oct 2 20:32:50.379674 systemd[1]: Reached target network.target. Oct 2 20:32:50.380156 systemd[1]: Reached target nss-lookup.target. Oct 2 20:32:51.262591 systemd-timesyncd[1012]: Contacted time server 162.19.224.29:123 (0.flatcar.pool.ntp.org). Oct 2 20:32:51.262679 systemd-resolved[1011]: Clock change detected. Flushing caches. Oct 2 20:32:51.263419 systemd-timesyncd[1012]: Initial clock synchronization to Mon 2023-10-02 20:32:51.262483 UTC. Oct 2 20:32:51.442311 ldconfig[989]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 20:32:51.458279 systemd[1]: Finished ldconfig.service. Oct 2 20:32:51.462494 systemd[1]: Starting systemd-update-done.service... Oct 2 20:32:51.477024 systemd[1]: Finished systemd-update-done.service. Oct 2 20:32:51.478400 systemd[1]: Reached target sysinit.target. Oct 2 20:32:51.479721 systemd[1]: Started motdgen.path. Oct 2 20:32:51.480810 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 20:32:51.482635 systemd[1]: Started logrotate.timer. Oct 2 20:32:51.483901 systemd[1]: Started mdadm.timer. Oct 2 20:32:51.484910 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 20:32:51.486041 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 20:32:51.486117 systemd[1]: Reached target paths.target. Oct 2 20:32:51.487292 systemd[1]: Reached target timers.target. Oct 2 20:32:51.490307 systemd[1]: Listening on dbus.socket. Oct 2 20:32:51.493840 systemd[1]: Starting docker.socket... Oct 2 20:32:51.501454 systemd[1]: Listening on sshd.socket. 
Oct 2 20:32:51.502720 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:32:51.503673 systemd[1]: Listening on docker.socket. Oct 2 20:32:51.504879 systemd[1]: Reached target sockets.target. Oct 2 20:32:51.505982 systemd[1]: Reached target basic.target. Oct 2 20:32:51.507235 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:32:51.507305 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:32:51.509505 systemd[1]: Starting containerd.service... Oct 2 20:32:51.512699 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 20:32:51.516253 systemd[1]: Starting dbus.service... Oct 2 20:32:51.522944 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 20:32:51.531554 systemd-networkd[971]: eth0: Gained IPv6LL Oct 2 20:32:51.532245 systemd[1]: Starting extend-filesystems.service... Oct 2 20:32:51.533600 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 20:32:51.536395 systemd[1]: Starting motdgen.service... Oct 2 20:32:51.541748 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 20:32:51.548353 systemd[1]: Starting prepare-critools.service... Oct 2 20:32:51.552100 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 20:32:51.557204 systemd[1]: Starting sshd-keygen.service... Oct 2 20:32:51.568760 systemd[1]: Starting systemd-logind.service... Oct 2 20:32:51.569301 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:32:51.569366 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 20:32:51.569881 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 20:32:51.570736 systemd[1]: Starting update-engine.service... Oct 2 20:32:51.573306 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 20:32:51.580688 jq[1036]: false Oct 2 20:32:51.580927 jq[1052]: true Oct 2 20:32:51.581694 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 20:32:51.581926 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 20:32:51.594307 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 20:32:51.594494 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 20:32:51.602976 tar[1054]: ./ Oct 2 20:32:51.602976 tar[1054]: ./loopback Oct 2 20:32:51.606329 tar[1055]: crictl Oct 2 20:32:51.634322 jq[1056]: true Oct 2 20:32:51.647683 dbus-daemon[1033]: [system] SELinux support is enabled Oct 2 20:32:51.648588 systemd[1]: Started dbus.service. Oct 2 20:32:51.651506 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 20:32:51.653946 systemd[1]: Finished motdgen.service. Oct 2 20:32:51.654622 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 20:32:51.654652 systemd[1]: Reached target system-config.target. 
Oct 2 20:32:51.659412 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 20:32:51.659441 systemd[1]: Reached target user-config.target. Oct 2 20:32:51.663943 extend-filesystems[1037]: Found vda Oct 2 20:32:51.666759 extend-filesystems[1037]: Found vda1 Oct 2 20:32:51.668295 extend-filesystems[1037]: Found vda2 Oct 2 20:32:51.669043 extend-filesystems[1037]: Found vda3 Oct 2 20:32:51.669619 extend-filesystems[1037]: Found usr Oct 2 20:32:51.670939 extend-filesystems[1037]: Found vda4 Oct 2 20:32:51.671567 extend-filesystems[1037]: Found vda6 Oct 2 20:32:51.672112 extend-filesystems[1037]: Found vda7 Oct 2 20:32:51.673297 extend-filesystems[1037]: Found vda9 Oct 2 20:32:51.674021 extend-filesystems[1037]: Checking size of /dev/vda9 Oct 2 20:32:51.699513 extend-filesystems[1037]: Resized partition /dev/vda9 Oct 2 20:32:51.704783 update_engine[1051]: I1002 20:32:51.703285 1051 main.cc:92] Flatcar Update Engine starting Oct 2 20:32:51.709701 systemd[1]: Started update-engine.service. Oct 2 20:32:51.709973 update_engine[1051]: I1002 20:32:51.709754 1051 update_check_scheduler.cc:74] Next update check in 8m50s Oct 2 20:32:51.710548 extend-filesystems[1087]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 20:32:51.713024 systemd[1]: Started locksmithd.service. Oct 2 20:32:51.746152 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Oct 2 20:32:51.753835 systemd[1]: Created slice system-sshd.slice. Oct 2 20:32:51.816041 systemd-logind[1049]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 20:32:51.816177 systemd-logind[1049]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 20:32:51.820356 systemd-logind[1049]: New seat seat0. Oct 2 20:32:51.833669 systemd[1]: Started systemd-logind.service. Oct 2 20:32:51.835222 tar[1054]: ./bandwidth Oct 2 20:32:51.837905 bash[1088]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:32:51.838281 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 20:32:51.839142 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Oct 2 20:32:51.897796 coreos-metadata[1032]: Oct 02 20:32:51.847 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 2 20:32:51.897796 coreos-metadata[1032]: Oct 02 20:32:51.869 INFO Fetch successful Oct 2 20:32:51.897796 coreos-metadata[1032]: Oct 02 20:32:51.869 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 20:32:51.897796 coreos-metadata[1032]: Oct 02 20:32:51.884 INFO Fetch successful Oct 2 20:32:51.898815 env[1057]: time="2023-10-02T20:32:51.848226617Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 20:32:51.898815 env[1057]: time="2023-10-02T20:32:51.878431786Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 20:32:51.898815 env[1057]: time="2023-10-02T20:32:51.896184962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:32:51.899745 extend-filesystems[1087]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 2 20:32:51.899745 extend-filesystems[1087]: old_desc_blocks = 1, new_desc_blocks = 3 Oct 2 20:32:51.899745 extend-filesystems[1087]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. 
Oct 2 20:32:51.915380 extend-filesystems[1037]: Resized filesystem in /dev/vda9 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.900808915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.901737907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.902242463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.902288289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.902311052Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.902324237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.902456595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.907026977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.908400102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:32:51.915924 env[1057]: time="2023-10-02T20:32:51.908454865Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 20:32:51.901410 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 20:32:51.916552 env[1057]: time="2023-10-02T20:32:51.908587403Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 20:32:51.916552 env[1057]: time="2023-10-02T20:32:51.908617800Z" level=info msg="metadata content store policy set" policy=shared Oct 2 20:32:51.901686 systemd[1]: Finished extend-filesystems.service. Oct 2 20:32:51.903237 unknown[1032]: wrote ssh authorized keys file for user: core Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.922796853Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.922830506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.922853209Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.922907350Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.922927218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.922944961Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.923000295Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.923018308Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.923033246Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.923050399Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.923065216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.923118 env[1057]: time="2023-10-02T20:32:51.923079854Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927004414Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927171287Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927530751Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927562401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927577449Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927650506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927669021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927682797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927755603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927775260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927789777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927802591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927815205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927838368Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 20:32:51.928758 env[1057]: time="2023-10-02T20:32:51.927992648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.934439 env[1057]: time="2023-10-02T20:32:51.928016743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.934439 env[1057]: time="2023-10-02T20:32:51.928033394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.934439 env[1057]: time="2023-10-02T20:32:51.928048052Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 20:32:51.934439 env[1057]: time="2023-10-02T20:32:51.928066376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 20:32:51.934439 env[1057]: time="2023-10-02T20:32:51.928079210Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 20:32:51.934439 env[1057]: time="2023-10-02T20:32:51.928098777Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 20:32:51.934439 env[1057]: time="2023-10-02T20:32:51.928150854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 20:32:51.933111 systemd[1]: Started containerd.service. 
Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.928373121Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.928440107Z" level=info msg="Connect containerd service" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.928474161Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.932630646Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.932892347Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.932930829Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.932983949Z" level=info msg="containerd successfully booted in 0.103750s" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.934842765Z" level=info msg="Start subscribing containerd event" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.934896195Z" level=info msg="Start recovering state" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.934962229Z" level=info msg="Start event monitor" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.934984772Z" level=info msg="Start snapshots syncer" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.934995862Z" level=info msg="Start cni network conf syncer for default" Oct 2 20:32:51.936114 env[1057]: time="2023-10-02T20:32:51.935009919Z" level=info msg="Start streaming server" Oct 2 20:32:51.941746 update-ssh-keys[1096]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:32:51.942368 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 20:32:51.986921 tar[1054]: ./ptp Oct 2 20:32:52.069359 tar[1054]: ./vlan Oct 2 20:32:52.145569 tar[1054]: ./host-device Oct 2 20:32:52.219182 tar[1054]: ./tuning Oct 2 20:32:52.286224 tar[1054]: ./vrf Oct 2 20:32:52.359971 tar[1054]: ./sbr Oct 2 20:32:52.380945 systemd[1]: Finished prepare-critools.service. Oct 2 20:32:52.412941 tar[1054]: ./tap Oct 2 20:32:52.455939 tar[1054]: ./dhcp Oct 2 20:32:52.555576 tar[1054]: ./static Oct 2 20:32:52.584106 tar[1054]: ./firewall Oct 2 20:32:52.627488 tar[1054]: ./macvlan Oct 2 20:32:52.667711 tar[1054]: ./dummy Oct 2 20:32:52.707254 tar[1054]: ./bridge Oct 2 20:32:52.751793 tar[1054]: ./ipvlan Oct 2 20:32:52.792101 tar[1054]: ./portmap Oct 2 20:32:52.829964 tar[1054]: ./host-local Oct 2 20:32:52.879933 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 20:32:52.898984 locksmithd[1089]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 20:32:53.039950 sshd_keygen[1064]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 20:32:53.067575 systemd[1]: Finished sshd-keygen.service. Oct 2 20:32:53.072636 systemd[1]: Starting issuegen.service... Oct 2 20:32:53.076878 systemd[1]: Started sshd@0-172.24.4.121:22-172.24.4.1:39276.service. Oct 2 20:32:53.078945 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 20:32:53.079106 systemd[1]: Finished issuegen.service. Oct 2 20:32:53.081076 systemd[1]: Starting systemd-user-sessions.service... Oct 2 20:32:53.088331 systemd[1]: Finished systemd-user-sessions.service. Oct 2 20:32:53.090207 systemd[1]: Started getty@tty1.service. Oct 2 20:32:53.091742 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 20:32:53.092477 systemd[1]: Reached target getty.target. Oct 2 20:32:53.093016 systemd[1]: Reached target multi-user.target. Oct 2 20:32:53.095885 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 20:32:53.106485 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 20:32:53.106641 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 20:32:53.107275 systemd[1]: Startup finished in 951ms (kernel) + 11.165s (initrd) + 8.347s (userspace) = 20.464s. Oct 2 20:32:54.336854 sshd[1114]: Accepted publickey for core from 172.24.4.1 port 39276 ssh2: RSA SHA256:04DBnBUG6fFEh6RHhq3Vh04U7QWBRpG3v2XU9WHMKYg Oct 2 20:32:54.342187 sshd[1114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:32:54.368235 systemd[1]: Created slice user-500.slice. 
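Note the earlier CRI message "no network config found in /etc/cni/net.d": the CNI plugin binaries (bridge, host-local, portmap, and the rest) have just been unpacked, but no network configuration exists yet, so the cni conf syncer keeps polling. On a Kubernetes node that directory is normally populated later by whichever network addon gets installed; the sketch below only illustrates the conflist format the syncer is looking for, with placeholder names and a placeholder subnet, and is not the configuration this host actually ends up with.

```python
import json
from pathlib import Path

CONF_DIR = Path("/etc/cni/net.d")            # directory named in the CRI error above

# All values below are placeholders for illustration; a real cluster's CNI addon decides them.
conflist = {
    "cniVersion": "0.3.1",
    "name": "example-bridge",
    "plugins": [
        {
            "type": "bridge",                # plugin binary unpacked above as ./bridge
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",        # ./host-local
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},  # ./portmap
    ],
}

CONF_DIR.mkdir(parents=True, exist_ok=True)
(CONF_DIR / "10-example.conflist").write_text(json.dumps(conflist, indent=2))
```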
Oct 2 20:32:54.371292 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 20:32:54.382269 systemd-logind[1049]: New session 1 of user core. Oct 2 20:32:54.394842 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 20:32:54.398381 systemd[1]: Starting user@500.service... Oct 2 20:32:54.406330 (systemd)[1123]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:32:54.530690 systemd[1123]: Queued start job for default target default.target. Oct 2 20:32:54.531395 systemd[1123]: Reached target paths.target. Oct 2 20:32:54.531414 systemd[1123]: Reached target sockets.target. Oct 2 20:32:54.531428 systemd[1123]: Reached target timers.target. Oct 2 20:32:54.531441 systemd[1123]: Reached target basic.target. Oct 2 20:32:54.531486 systemd[1123]: Reached target default.target. Oct 2 20:32:54.531510 systemd[1123]: Startup finished in 111ms. Oct 2 20:32:54.532538 systemd[1]: Started user@500.service. Oct 2 20:32:54.536175 systemd[1]: Started session-1.scope. Oct 2 20:32:55.030002 systemd[1]: Started sshd@1-172.24.4.121:22-172.24.4.1:36552.service. Oct 2 20:32:56.416443 sshd[1132]: Accepted publickey for core from 172.24.4.1 port 36552 ssh2: RSA SHA256:04DBnBUG6fFEh6RHhq3Vh04U7QWBRpG3v2XU9WHMKYg Oct 2 20:32:56.419658 sshd[1132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:32:56.431791 systemd-logind[1049]: New session 2 of user core. Oct 2 20:32:56.432192 systemd[1]: Started session-2.scope. Oct 2 20:32:57.243413 sshd[1132]: pam_unix(sshd:session): session closed for user core Oct 2 20:32:57.253957 systemd[1]: Started sshd@2-172.24.4.121:22-172.24.4.1:36560.service. Oct 2 20:32:57.255384 systemd[1]: sshd@1-172.24.4.121:22-172.24.4.1:36552.service: Deactivated successfully. Oct 2 20:32:57.256904 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 20:32:57.261509 systemd-logind[1049]: Session 2 logged out. Waiting for processes to exit. Oct 2 20:32:57.264297 systemd-logind[1049]: Removed session 2. Oct 2 20:32:58.500717 sshd[1137]: Accepted publickey for core from 172.24.4.1 port 36560 ssh2: RSA SHA256:04DBnBUG6fFEh6RHhq3Vh04U7QWBRpG3v2XU9WHMKYg Oct 2 20:32:58.504070 sshd[1137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:32:58.512885 systemd-logind[1049]: New session 3 of user core. Oct 2 20:32:58.520835 systemd[1]: Started session-3.scope. Oct 2 20:32:59.143303 sshd[1137]: pam_unix(sshd:session): session closed for user core Oct 2 20:32:59.149666 systemd[1]: Started sshd@3-172.24.4.121:22-172.24.4.1:36574.service. Oct 2 20:32:59.152747 systemd[1]: sshd@2-172.24.4.121:22-172.24.4.1:36560.service: Deactivated successfully. Oct 2 20:32:59.154230 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 20:32:59.156792 systemd-logind[1049]: Session 3 logged out. Waiting for processes to exit. Oct 2 20:32:59.159670 systemd-logind[1049]: Removed session 3. Oct 2 20:33:00.443249 sshd[1143]: Accepted publickey for core from 172.24.4.1 port 36574 ssh2: RSA SHA256:04DBnBUG6fFEh6RHhq3Vh04U7QWBRpG3v2XU9WHMKYg Oct 2 20:33:00.447329 sshd[1143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:33:00.458030 systemd-logind[1049]: New session 4 of user core. Oct 2 20:33:00.458903 systemd[1]: Started session-4.scope. Oct 2 20:33:01.181855 sshd[1143]: pam_unix(sshd:session): session closed for user core Oct 2 20:33:01.188271 systemd[1]: Started sshd@4-172.24.4.121:22-172.24.4.1:36586.service. 
Oct 2 20:33:01.196008 systemd[1]: sshd@3-172.24.4.121:22-172.24.4.1:36574.service: Deactivated successfully. Oct 2 20:33:01.198117 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 20:33:01.203415 systemd-logind[1049]: Session 4 logged out. Waiting for processes to exit. Oct 2 20:33:01.205572 systemd-logind[1049]: Removed session 4. Oct 2 20:33:02.388786 sshd[1149]: Accepted publickey for core from 172.24.4.1 port 36586 ssh2: RSA SHA256:04DBnBUG6fFEh6RHhq3Vh04U7QWBRpG3v2XU9WHMKYg Oct 2 20:33:02.391380 sshd[1149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:33:02.401619 systemd[1]: Started session-5.scope. Oct 2 20:33:02.403229 systemd-logind[1049]: New session 5 of user core. Oct 2 20:33:02.974485 sudo[1153]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 20:33:02.975524 sudo[1153]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:33:02.986426 dbus-daemon[1033]: \xd0M\x92\xbe\u0005V: received setenforce notice (enforcing=2059321776) Oct 2 20:33:02.990646 sudo[1153]: pam_unix(sudo:session): session closed for user root Oct 2 20:33:03.215195 sshd[1149]: pam_unix(sshd:session): session closed for user core Oct 2 20:33:03.221829 systemd[1]: Started sshd@5-172.24.4.121:22-172.24.4.1:36588.service. Oct 2 20:33:03.224401 systemd[1]: sshd@4-172.24.4.121:22-172.24.4.1:36586.service: Deactivated successfully. Oct 2 20:33:03.225891 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 20:33:03.229333 systemd-logind[1049]: Session 5 logged out. Waiting for processes to exit. Oct 2 20:33:03.232442 systemd-logind[1049]: Removed session 5. Oct 2 20:33:04.477090 sshd[1156]: Accepted publickey for core from 172.24.4.1 port 36588 ssh2: RSA SHA256:04DBnBUG6fFEh6RHhq3Vh04U7QWBRpG3v2XU9WHMKYg Oct 2 20:33:04.479931 sshd[1156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:33:04.489427 systemd-logind[1049]: New session 6 of user core. Oct 2 20:33:04.489903 systemd[1]: Started session-6.scope. Oct 2 20:33:05.010222 sudo[1161]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 20:33:05.010743 sudo[1161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:33:05.017483 sudo[1161]: pam_unix(sudo:session): session closed for user root Oct 2 20:33:05.028225 sudo[1160]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 20:33:05.029475 sudo[1160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:33:05.050377 systemd[1]: Stopping audit-rules.service... 
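The session-5 sudo above ran /usr/sbin/setenforce 1. To confirm the resulting mode on a host like this, the kernel exposes it through selinuxfs; the path below is the standard interface rather than something shown in this log, so treat the snippet as a generic sketch.

```python
from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")  # standard selinuxfs node: "1" enforcing, "0" permissive

def selinux_mode() -> str:
    try:
        return "Enforcing" if ENFORCE.read_text().strip() == "1" else "Permissive"
    except FileNotFoundError:
        return "Disabled (selinuxfs not mounted)"

print(selinux_mode())
```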
Oct 2 20:33:05.051000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:33:05.055477 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 20:33:05.055629 kernel: audit: type=1305 audit(1696278785.051:174): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:33:05.051000 audit[1164]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd34342e50 a2=420 a3=0 items=0 ppid=1 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:05.060783 auditctl[1164]: No rules Oct 2 20:33:05.062218 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 20:33:05.071471 kernel: audit: type=1300 audit(1696278785.051:174): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd34342e50 a2=420 a3=0 items=0 ppid=1 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:05.062686 systemd[1]: Stopped audit-rules.service. Oct 2 20:33:05.072733 systemd[1]: Starting audit-rules.service... Oct 2 20:33:05.051000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:33:05.086057 kernel: audit: type=1327 audit(1696278785.051:174): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:33:05.086224 kernel: audit: type=1131 audit(1696278785.060:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.114623 augenrules[1181]: No rules Oct 2 20:33:05.116329 systemd[1]: Finished audit-rules.service. Oct 2 20:33:05.118792 sudo[1160]: pam_unix(sudo:session): session closed for user root Oct 2 20:33:05.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.117000 audit[1160]: USER_END pid=1160 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.128095 kernel: audit: type=1130 audit(1696278785.115:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.128337 kernel: audit: type=1106 audit(1696278785.117:177): pid=1160 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 20:33:05.128395 kernel: audit: type=1104 audit(1696278785.117:178): pid=1160 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.117000 audit[1160]: CRED_DISP pid=1160 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.303199 sshd[1156]: pam_unix(sshd:session): session closed for user core Oct 2 20:33:05.306000 audit[1156]: USER_END pid=1156 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:05.312285 systemd[1]: Started sshd@6-172.24.4.121:22-172.24.4.1:49490.service. Oct 2 20:33:05.326210 kernel: audit: type=1106 audit(1696278785.306:179): pid=1156 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:05.329986 kernel: audit: type=1104 audit(1696278785.306:180): pid=1156 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:05.306000 audit[1156]: CRED_DISP pid=1156 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:05.327306 systemd[1]: sshd@5-172.24.4.121:22-172.24.4.1:36588.service: Deactivated successfully. Oct 2 20:33:05.328830 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 20:33:05.336177 systemd-logind[1049]: Session 6 logged out. Waiting for processes to exit. Oct 2 20:33:05.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.121:22-172.24.4.1:49490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.121:22-172.24.4.1:36588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.349213 kernel: audit: type=1130 audit(1696278785.311:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.121:22-172.24.4.1:49490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:05.349530 systemd-logind[1049]: Removed session 6. 
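The audit records above carry the flushed command line hex-encoded in the PROCTITLE field. Decoding it matches what auditctl reported ("No rules"): the value corresponds to /sbin/auditctl -D, which deletes every loaded audit rule before augenrules rebuilds the now-empty set. A small sketch of the decoding, using the value copied from the record above:

```python
def decode_proctitle(hex_title: str) -> str:
    """Audit PROCTITLE fields are the process argv, hex-encoded with NUL separators."""
    return " ".join(
        part.decode() for part in bytes.fromhex(hex_title).split(b"\x00") if part
    )

# Value taken verbatim from the PROCTITLE record above.
print(decode_proctitle("2F7362696E2F617564697463746C002D44"))  # -> /sbin/auditctl -D
```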
Oct 2 20:33:06.558000 audit[1186]: USER_ACCT pid=1186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:06.559364 sshd[1186]: Accepted publickey for core from 172.24.4.1 port 49490 ssh2: RSA SHA256:04DBnBUG6fFEh6RHhq3Vh04U7QWBRpG3v2XU9WHMKYg Oct 2 20:33:06.559000 audit[1186]: CRED_ACQ pid=1186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:06.560000 audit[1186]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea4c82050 a2=3 a3=0 items=0 ppid=1 pid=1186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:06.560000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 20:33:06.562278 sshd[1186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:33:06.572763 systemd-logind[1049]: New session 7 of user core. Oct 2 20:33:06.573818 systemd[1]: Started session-7.scope. Oct 2 20:33:06.585000 audit[1186]: USER_START pid=1186 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:06.588000 audit[1189]: CRED_ACQ pid=1189 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:07.117000 audit[1190]: USER_ACCT pid=1190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:07.118000 audit[1190]: CRED_REFR pid=1190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:07.119022 sudo[1190]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 20:33:07.119558 sudo[1190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:33:07.123000 audit[1190]: USER_START pid=1190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:07.774852 systemd[1]: Reloading. 
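The daemon reload that follows re-runs the torcx generator and reloads pid 1's BPF programs, and each load/unload is accompanied by AVC records denying the bpf and perfmon capabilities for comm="systemd", which is what produces the long run of audit lines below. To get a quick summary instead of reading them one by one, a throwaway tally over journalctl-style text (for example, this excerpt piped in on stdin) might look like this sketch:

```python
import re
import sys
from collections import Counter

# Matches records such as: audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" ...
AVC = re.compile(r'AVC avc:\s+denied\s+\{ (?P<perm>[^}]+) \}.*?comm="(?P<comm>[^"]+)"')

counts = Counter()
for line in sys.stdin:
    m = AVC.search(line)
    if m:
        counts[(m.group("comm"), m.group("perm").strip())] += 1

for (comm, perm), n in counts.most_common():
    print(f"{n:6d}  comm={comm}  perm={perm}")
```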
Oct 2 20:33:07.939346 /usr/lib/systemd/system-generators/torcx-generator[1219]: time="2023-10-02T20:33:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:33:07.939378 /usr/lib/systemd/system-generators/torcx-generator[1219]: time="2023-10-02T20:33:07Z" level=info msg="torcx already run" Oct 2 20:33:08.017870 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:33:08.017894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:33:08.043114 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.114000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit: BPF prog-id=37 op=LOAD Oct 2 20:33:08.115000 audit: BPF prog-id=32 op=UNLOAD Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit: BPF prog-id=38 op=LOAD Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.115000 audit: BPF prog-id=39 
op=LOAD Oct 2 20:33:08.115000 audit: BPF prog-id=33 op=UNLOAD Oct 2 20:33:08.115000 audit: BPF prog-id=34 op=UNLOAD Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.116000 audit: BPF prog-id=40 op=LOAD Oct 2 20:33:08.116000 audit: BPF prog-id=31 op=UNLOAD Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:33:08.118000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.118000 audit: BPF prog-id=41 op=LOAD Oct 2 20:33:08.118000 audit: BPF prog-id=30 op=UNLOAD Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit: BPF prog-id=42 op=LOAD Oct 2 20:33:08.120000 audit: BPF prog-id=21 op=UNLOAD Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit: BPF prog-id=43 op=LOAD Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.120000 audit: BPF prog-id=44 op=LOAD Oct 2 20:33:08.120000 audit: BPF prog-id=22 op=UNLOAD Oct 2 20:33:08.120000 audit: BPF prog-id=23 op=UNLOAD Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit: BPF prog-id=45 op=LOAD Oct 2 20:33:08.121000 audit: BPF prog-id=27 op=UNLOAD Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit: BPF prog-id=46 op=LOAD Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.121000 audit: BPF prog-id=47 op=LOAD Oct 2 20:33:08.121000 audit: BPF prog-id=28 op=UNLOAD Oct 2 20:33:08.121000 audit: BPF prog-id=29 op=UNLOAD Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit: BPF prog-id=48 op=LOAD Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.122000 audit: BPF prog-id=49 op=LOAD Oct 2 20:33:08.122000 audit: BPF prog-id=24 op=UNLOAD Oct 2 20:33:08.122000 audit: BPF prog-id=25 op=UNLOAD Oct 2 20:33:08.123000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.123000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.124000 audit: BPF prog-id=50 op=LOAD Oct 2 20:33:08.124000 audit: BPF prog-id=26 op=UNLOAD Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:08.125000 audit: BPF prog-id=51 op=LOAD Oct 2 20:33:08.125000 audit: BPF prog-id=35 op=UNLOAD Oct 2 20:33:08.149558 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 20:33:08.164064 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 20:33:08.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:08.165531 systemd[1]: Reached target network-online.target. Oct 2 20:33:08.168948 systemd[1]: Started kubelet.service. Oct 2 20:33:08.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:08.193650 systemd[1]: Starting coreos-metadata.service... Oct 2 20:33:08.255759 coreos-metadata[1274]: Oct 02 20:33:08.255 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 2 20:33:08.263092 kubelet[1267]: E1002 20:33:08.263036 1267 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 2 20:33:08.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 20:33:08.265276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 20:33:08.265408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 20:33:08.586929 coreos-metadata[1274]: Oct 02 20:33:08.586 INFO Fetch successful Oct 2 20:33:08.586929 coreos-metadata[1274]: Oct 02 20:33:08.586 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 2 20:33:08.603977 coreos-metadata[1274]: Oct 02 20:33:08.603 INFO Fetch successful Oct 2 20:33:08.603977 coreos-metadata[1274]: Oct 02 20:33:08.603 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 2 20:33:08.618090 coreos-metadata[1274]: Oct 02 20:33:08.618 INFO Fetch successful Oct 2 20:33:08.619312 coreos-metadata[1274]: Oct 02 20:33:08.619 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 2 20:33:08.633937 coreos-metadata[1274]: Oct 02 20:33:08.633 INFO Fetch successful Oct 2 20:33:08.633937 coreos-metadata[1274]: Oct 02 20:33:08.633 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 2 20:33:08.649900 coreos-metadata[1274]: Oct 02 20:33:08.649 INFO Fetch successful Oct 2 20:33:08.666589 systemd[1]: Finished coreos-metadata.service. Oct 2 20:33:08.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:09.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:33:09.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:09.411177 systemd[1]: Stopped kubelet.service. Oct 2 20:33:09.454713 systemd[1]: Reloading. Oct 2 20:33:09.587292 /usr/lib/systemd/system-generators/torcx-generator[1336]: time="2023-10-02T20:33:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:33:09.587698 /usr/lib/systemd/system-generators/torcx-generator[1336]: time="2023-10-02T20:33:09Z" level=info msg="torcx already run" Oct 2 20:33:09.665428 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:33:09.665659 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:33:09.693391 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:33:09.763000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.763000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.764000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.764000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit: BPF 
prog-id=52 op=LOAD Oct 2 20:33:09.765000 audit: BPF prog-id=37 op=UNLOAD Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.765000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit: BPF prog-id=53 op=LOAD Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.766000 audit: BPF prog-id=54 op=LOAD Oct 2 20:33:09.766000 audit: BPF prog-id=38 op=UNLOAD Oct 2 20:33:09.766000 audit: BPF prog-id=39 op=UNLOAD Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.767000 audit: BPF prog-id=55 op=LOAD Oct 2 20:33:09.767000 audit: BPF prog-id=40 op=UNLOAD Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.769000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.770000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.770000 audit: BPF prog-id=56 op=LOAD Oct 2 20:33:09.770000 audit: BPF prog-id=41 op=UNLOAD Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.771000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit: BPF prog-id=57 op=LOAD Oct 2 20:33:09.772000 audit: BPF prog-id=42 op=UNLOAD Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.772000 audit: BPF prog-id=58 op=LOAD Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.773000 audit: BPF prog-id=59 op=LOAD Oct 2 20:33:09.773000 audit: BPF prog-id=43 op=UNLOAD Oct 2 20:33:09.773000 audit: BPF prog-id=44 op=UNLOAD Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit: BPF prog-id=60 op=LOAD Oct 2 20:33:09.774000 audit: BPF prog-id=45 op=UNLOAD Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.774000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit: BPF prog-id=61 op=LOAD Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.775000 audit: BPF prog-id=62 op=LOAD Oct 2 20:33:09.775000 audit: BPF prog-id=46 op=UNLOAD Oct 2 20:33:09.775000 audit: BPF prog-id=47 op=UNLOAD Oct 2 20:33:09.776000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit: BPF prog-id=63 op=LOAD Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.777000 audit: BPF prog-id=64 op=LOAD Oct 2 20:33:09.778000 audit: BPF prog-id=48 op=UNLOAD Oct 2 20:33:09.778000 audit: BPF prog-id=49 op=UNLOAD Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.779000 audit: BPF prog-id=65 op=LOAD Oct 2 20:33:09.780000 audit: BPF prog-id=50 op=UNLOAD Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:33:09.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:09.782000 audit: BPF prog-id=66 op=LOAD Oct 2 20:33:09.782000 audit: BPF prog-id=51 op=UNLOAD Oct 2 20:33:09.806482 systemd[1]: Started kubelet.service. Oct 2 20:33:09.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:09.915163 kubelet[1380]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:33:09.915163 kubelet[1380]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 2 20:33:09.915163 kubelet[1380]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:33:09.916371 kubelet[1380]: I1002 20:33:09.915176 1380 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 20:33:10.467879 kubelet[1380]: I1002 20:33:10.467820 1380 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Oct 2 20:33:10.467997 kubelet[1380]: I1002 20:33:10.467895 1380 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 20:33:10.468533 kubelet[1380]: I1002 20:33:10.468497 1380 server.go:895] "Client rotation is on, will bootstrap in background" Oct 2 20:33:10.472670 kubelet[1380]: I1002 20:33:10.472647 1380 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 20:33:10.484700 kubelet[1380]: I1002 20:33:10.484671 1380 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 20:33:10.485139 kubelet[1380]: I1002 20:33:10.485093 1380 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 20:33:10.485550 kubelet[1380]: I1002 20:33:10.485509 1380 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 2 20:33:10.485655 kubelet[1380]: I1002 20:33:10.485568 1380 topology_manager.go:138] "Creating topology manager with none policy" Oct 2 20:33:10.485655 kubelet[1380]: I1002 20:33:10.485595 1380 container_manager_linux.go:301] "Creating device plugin manager" Oct 2 20:33:10.485813 kubelet[1380]: I1002 20:33:10.485780 1380 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:33:10.485997 kubelet[1380]: I1002 20:33:10.485968 1380 kubelet.go:393] "Attempting to sync node with API server" Oct 2 20:33:10.486051 kubelet[1380]: I1002 20:33:10.486010 1380 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 20:33:10.486081 kubelet[1380]: I1002 20:33:10.486057 1380 kubelet.go:309] "Adding apiserver pod source" Oct 2 20:33:10.486109 kubelet[1380]: I1002 20:33:10.486090 1380 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 20:33:10.486601 kubelet[1380]: E1002 20:33:10.486583 1380 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:10.487394 kubelet[1380]: E1002 20:33:10.487308 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:10.487527 kubelet[1380]: I1002 20:33:10.487496 1380 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 20:33:10.487864 kubelet[1380]: W1002 20:33:10.487850 1380 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 20:33:10.488870 kubelet[1380]: I1002 20:33:10.488856 1380 server.go:1232] "Started kubelet" Oct 2 20:33:10.489960 kubelet[1380]: I1002 20:33:10.489924 1380 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 20:33:10.490551 kubelet[1380]: I1002 20:33:10.490504 1380 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 20:33:10.492696 kubelet[1380]: I1002 20:33:10.492654 1380 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 20:33:10.499601 kernel: kauditd_printk_skb: 362 callbacks suppressed Oct 2 20:33:10.499661 kernel: audit: type=1400 audit(1696278790.491:542): avc: denied { mac_admin } for pid=1380 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:10.499694 kernel: audit: type=1401 audit(1696278790.491:542): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:33:10.491000 audit[1380]: AVC avc: denied { mac_admin } for pid=1380 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:10.491000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:33:10.499808 kubelet[1380]: I1002 20:33:10.495728 1380 server.go:462] "Adding debug handlers to kubelet server" Oct 2 20:33:10.499937 kubelet[1380]: I1002 20:33:10.499917 1380 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 20:33:10.500039 kubelet[1380]: I1002 20:33:10.500022 1380 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 20:33:10.500211 kubelet[1380]: I1002 20:33:10.500199 1380 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 20:33:10.491000 audit[1380]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b6d4d0 a1=c000b6e9c0 a2=c000b6d4a0 a3=25 items=0 ppid=1 pid=1380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.510837 kernel: audit: type=1300 audit(1696278790.491:542): arch=c000003e syscall=188 success=no exit=-22 a0=c000b6d4d0 a1=c000b6e9c0 a2=c000b6d4a0 a3=25 items=0 ppid=1 pid=1380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.510880 kernel: audit: type=1327 audit(1696278790.491:542): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:33:10.491000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:33:10.498000 
audit[1380]: AVC avc: denied { mac_admin } for pid=1380 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:10.516874 kubelet[1380]: E1002 20:33:10.514408 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929b1db8da", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 488832218, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 488832218, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:33:10.517147 kernel: audit: type=1400 audit(1696278790.498:543): avc: denied { mac_admin } for pid=1380 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:10.521310 kubelet[1380]: W1002 20:33:10.521285 1380 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.24.4.121" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:33:10.521503 kubelet[1380]: E1002 20:33:10.521491 1380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.121" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:33:10.521616 kubelet[1380]: W1002 20:33:10.521603 1380 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:33:10.521683 kubelet[1380]: E1002 20:33:10.521674 1380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:33:10.498000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:33:10.526276 kernel: audit: type=1401 audit(1696278790.498:543): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:33:10.526623 kubelet[1380]: E1002 20:33:10.526605 1380 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.24.4.121\" not found" Oct 2 20:33:10.526772 kubelet[1380]: I1002 20:33:10.526759 1380 
volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 20:33:10.526940 kubelet[1380]: I1002 20:33:10.526927 1380 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 20:33:10.527077 kubelet[1380]: I1002 20:33:10.527065 1380 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 20:33:10.498000 audit[1380]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ca61a0 a1=c000b6e9d8 a2=c000b6d560 a3=25 items=0 ppid=1 pid=1380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.533147 kernel: audit: type=1300 audit(1696278790.498:543): arch=c000003e syscall=188 success=no exit=-22 a0=c000ca61a0 a1=c000b6e9d8 a2=c000b6d560 a3=25 items=0 ppid=1 pid=1380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.533439 kubelet[1380]: E1002 20:33:10.533413 1380 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 20:33:10.533525 kubelet[1380]: E1002 20:33:10.533514 1380 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 20:33:10.498000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:33:10.541414 kernel: audit: type=1327 audit(1696278790.498:543): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:33:10.546783 kubelet[1380]: W1002 20:33:10.546749 1380 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:33:10.546982 kubelet[1380]: E1002 20:33:10.546961 1380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:33:10.547332 kubelet[1380]: E1002 20:33:10.547204 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929dc753b4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", 
APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 533501876, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 533501876, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:33:10.547670 kubelet[1380]: E1002 20:33:10.547646 1380 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.121\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 20:33:10.566000 audit[1393]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.566000 audit[1393]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff835dd920 a2=0 a3=7fff835dd90c items=0 ppid=1380 pid=1393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.571366 kubelet[1380]: I1002 20:33:10.571348 1380 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 20:33:10.571490 kubelet[1380]: I1002 20:33:10.571479 1380 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 20:33:10.571580 kubelet[1380]: I1002 20:33:10.571571 1380 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:33:10.577173 kernel: audit: type=1325 audit(1696278790.566:544): table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.577240 kernel: audit: type=1300 audit(1696278790.566:544): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff835dd920 a2=0 a3=7fff835dd90c items=0 ppid=1380 pid=1393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:33:10.577752 kubelet[1380]: E1002 20:33:10.577612 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff798fa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.121 status is now: 
NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570219770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570219770, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:33:10.578261 kubelet[1380]: I1002 20:33:10.578249 1380 policy_none.go:49] "None policy: Start" Oct 2 20:33:10.579041 kubelet[1380]: I1002 20:33:10.579030 1380 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 20:33:10.579152 kubelet[1380]: I1002 20:33:10.579141 1380 state_mem.go:35] "Initializing new in-memory state store" Oct 2 20:33:10.579824 kubelet[1380]: E1002 20:33:10.579705 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ae08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.121 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570225160, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570225160, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:33:10.581384 kubelet[1380]: E1002 20:33:10.581308 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ba3e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.121 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570228286, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570228286, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:33:10.580000 audit[1398]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1398 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.580000 audit[1398]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc29bcd2a0 a2=0 a3=7ffc29bcd28c items=0 ppid=1380 pid=1398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:33:10.585637 systemd[1]: Created slice kubepods.slice. Oct 2 20:33:10.591463 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 20:33:10.594728 systemd[1]: Created slice kubepods-besteffort.slice. 
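The three "Created slice" entries above are the kubelet asking systemd (cgroup driver "systemd", cgroup root "/" per the nodeConfig earlier) to create the per-QoS-class cgroup parents. A small illustrative mapping is sketched below; the helper function is an assumption for the example, not kubelet code, but the slice layout matches the paths seen later in this log (e.g. /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/...).

```python
# Illustrative mapping from pod QoS class to the systemd slice the kubelet just
# created. Guaranteed pods sit directly under kubepods.slice; Burstable and
# BestEffort pods get their own child slices. Not kubelet code.
def qos_parent_slice(qos_class: str) -> str:
    return {
        "Guaranteed": "kubepods.slice",
        "Burstable":  "kubepods.slice/kubepods-burstable.slice",
        "BestEffort": "kubepods.slice/kubepods-besteffort.slice",
    }[qos_class]

for qos in ("Guaranteed", "Burstable", "BestEffort"):
    print(qos, "->", qos_parent_slice(qos))
```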
Oct 2 20:33:10.596063 kubelet[1380]: W1002 20:33:10.596043 1380 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Oct 2 20:33:10.599930 kubelet[1380]: I1002 20:33:10.599901 1380 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 20:33:10.598000 audit[1380]: AVC avc: denied { mac_admin } for pid=1380 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:10.598000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:33:10.598000 audit[1380]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c14930 a1=c000c17650 a2=c000c14900 a3=25 items=0 ppid=1 pid=1380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.598000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:33:10.602046 kubelet[1380]: I1002 20:33:10.601955 1380 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 20:33:10.602809 kubelet[1380]: E1002 20:33:10.602795 1380 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.121\" not found" Oct 2 20:33:10.602966 kubelet[1380]: I1002 20:33:10.602938 1380 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 20:33:10.603906 kubelet[1380]: E1002 20:33:10.603837 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a6492a1d72a4b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 601648715, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 601648715, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:33:10.592000 audit[1400]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.592000 audit[1400]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdb44e0920 a2=0 a3=7ffdb44e090c items=0 ppid=1380 pid=1400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:33:10.613000 audit[1405]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.613000 audit[1405]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffcad87f50 a2=0 a3=7fffcad87f3c items=0 ppid=1380 pid=1405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:33:10.628885 kubelet[1380]: I1002 20:33:10.628839 1380 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.121" Oct 2 20:33:10.630607 kubelet[1380]: E1002 20:33:10.630583 1380 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.121" Oct 2 20:33:10.634434 kubelet[1380]: E1002 20:33:10.634344 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff798fa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.121 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570219770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 628756205, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff798fa" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
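The PROCTITLE fields in the NETFILTER_CFG audit records above carry the invoking command line hex-encoded, with NUL bytes separating the arguments. Decoding one of the values from this log (the rule registered by pid 1400) shows the kubelet installing its KUBE-FIREWALL jump rule via iptables; the decoding sketch below is only a convenience, not part of the audit tooling.

```python
# Decode an audit PROCTITLE value from the records above: hex-encoded argv,
# NUL-separated.
proctitle = (
    "69707461626C6573002D770035002D5700313030303030002D49004F5554505554"
    "002D740066696C746572002D6A004B5542452D4649524557414C4C"
)
args = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(a.decode() for a in args))
# -> iptables -w 5 -W 100000 -I OUTPUT -t filter -j KUBE-FIREWALL
```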
Oct 2 20:33:10.635705 kubelet[1380]: E1002 20:33:10.635623 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ae08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.121 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570225160, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 628766765, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff7ae08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:33:10.638139 kubelet[1380]: E1002 20:33:10.637602 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ba3e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.121 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570228286, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 628785430, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff7ba3e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:33:10.669000 audit[1410]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.669000 audit[1410]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcb00589e0 a2=0 a3=7ffcb00589cc items=0 ppid=1380 pid=1410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 20:33:10.671173 kubelet[1380]: I1002 20:33:10.671142 1380 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 2 20:33:10.671000 audit[1412]: NETFILTER_CFG table=mangle:7 family=2 entries=1 op=nft_register_chain pid=1412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.671000 audit[1412]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6957bc60 a2=0 a3=7ffd6957bc4c items=0 ppid=1380 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.671000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:33:10.672000 audit[1411]: NETFILTER_CFG table=mangle:8 family=10 entries=2 op=nft_register_chain pid=1411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:10.672000 audit[1411]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb4beb700 a2=0 a3=7ffdb4beb6ec items=0 ppid=1380 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:33:10.673000 audit[1413]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.673000 audit[1413]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc51e49a00 a2=0 a3=7ffc51e499ec items=0 ppid=1380 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:33:10.674547 kubelet[1380]: I1002 20:33:10.674532 1380 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 2 20:33:10.674633 kubelet[1380]: I1002 20:33:10.674623 1380 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 20:33:10.674819 kubelet[1380]: I1002 20:33:10.674807 1380 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 20:33:10.674952 kubelet[1380]: E1002 20:33:10.674939 1380 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 20:33:10.675000 audit[1414]: NETFILTER_CFG table=mangle:10 family=10 entries=1 op=nft_register_chain pid=1414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:10.675000 audit[1414]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef72b2ed0 a2=0 a3=7ffef72b2ebc items=0 ppid=1380 pid=1414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.675000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:33:10.677220 kubelet[1380]: W1002 20:33:10.677175 1380 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:33:10.677271 kubelet[1380]: E1002 20:33:10.677236 1380 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:33:10.676000 audit[1415]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:10.676000 audit[1415]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd78256340 a2=0 a3=7ffd7825632c items=0 ppid=1380 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:33:10.678000 audit[1416]: NETFILTER_CFG table=nat:12 family=10 entries=2 op=nft_register_chain pid=1416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:10.678000 audit[1416]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe3acfac40 a2=0 a3=7ffe3acfac2c items=0 ppid=1380 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:33:10.680000 audit[1417]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=1417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:10.680000 audit[1417]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe3d9d75d0 a2=0 a3=7ffe3d9d75bc items=0 ppid=1380 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:10.680000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:33:10.753810 kubelet[1380]: E1002 20:33:10.750872 1380 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.121\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 20:33:10.832604 kubelet[1380]: I1002 20:33:10.832540 1380 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.121" Oct 2 20:33:10.835244 kubelet[1380]: E1002 20:33:10.835043 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff798fa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.121 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570219770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 832456365, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff798fa" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:33:10.835726 kubelet[1380]: E1002 20:33:10.835689 1380 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.121" Oct 2 20:33:10.836744 kubelet[1380]: E1002 20:33:10.836635 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ae08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.121 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570225160, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 832467946, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff7ae08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:33:10.838450 kubelet[1380]: E1002 20:33:10.838339 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ba3e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.121 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570228286, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 832475150, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff7ba3e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:33:11.158545 kubelet[1380]: E1002 20:33:11.157457 1380 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.24.4.121\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 20:33:11.238007 kubelet[1380]: I1002 20:33:11.237964 1380 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.121" Oct 2 20:33:11.239643 kubelet[1380]: E1002 20:33:11.239488 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff798fa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.121 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570219770, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 11, 237287706, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff798fa" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:33:11.240104 kubelet[1380]: E1002 20:33:11.240031 1380 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.121" Oct 2 20:33:11.242026 kubelet[1380]: E1002 20:33:11.241910 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ae08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.121 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570225160, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 11, 237330636, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff7ae08" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:33:11.244599 kubelet[1380]: E1002 20:33:11.244429 1380 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.121.178a64929ff7ba3e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.121", UID:"172.24.4.121", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.121 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.121"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 33, 10, 570228286, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 33, 11, 237337078, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.24.4.121"}': 'events "172.24.4.121.178a64929ff7ba3e" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:33:11.472933 kubelet[1380]: I1002 20:33:11.471953 1380 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 20:33:11.488589 kubelet[1380]: E1002 20:33:11.488472 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:11.900955 kubelet[1380]: E1002 20:33:11.900494 1380 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.121" not found Oct 2 20:33:11.967579 kubelet[1380]: E1002 20:33:11.967526 1380 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.121\" not found" node="172.24.4.121" Oct 2 20:33:12.041737 kubelet[1380]: I1002 20:33:12.041670 1380 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.121" Oct 2 20:33:12.050569 kubelet[1380]: I1002 20:33:12.050470 1380 kubelet_node_status.go:73] "Successfully registered node" node="172.24.4.121" Oct 2 20:33:12.195917 kubelet[1380]: I1002 20:33:12.195334 1380 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 20:33:12.196610 env[1057]: time="2023-10-02T20:33:12.195662888Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 20:33:12.197545 kubelet[1380]: I1002 20:33:12.196466 1380 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 20:33:12.335000 audit[1190]: USER_END pid=1190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:12.336585 sudo[1190]: pam_unix(sudo:session): session closed for user root Oct 2 20:33:12.337000 audit[1190]: CRED_DISP pid=1190 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:33:12.488683 kubelet[1380]: I1002 20:33:12.488383 1380 apiserver.go:52] "Watching apiserver" Oct 2 20:33:12.489732 kubelet[1380]: E1002 20:33:12.489646 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:12.494873 kubelet[1380]: I1002 20:33:12.494838 1380 topology_manager.go:215] "Topology Admit Handler" podUID="fe5d3995-e363-455c-8f28-03f29dbc9698" podNamespace="kube-system" podName="kube-proxy-n99dg" Oct 2 20:33:12.495281 kubelet[1380]: I1002 20:33:12.495247 1380 topology_manager.go:215] "Topology Admit Handler" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" podNamespace="kube-system" podName="cilium-pxpmm" Oct 2 20:33:12.511983 systemd[1]: Created slice kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice. Oct 2 20:33:12.528414 kubelet[1380]: I1002 20:33:12.528033 1380 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 20:33:12.530589 systemd[1]: Created slice kubepods-besteffort-podfe5d3995_e363_455c_8f28_03f29dbc9698.slice. 
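The two slices created above follow the kubelet's systemd cgroup-driver naming: the pod's QoS class plus its UID with dashes replaced by underscores, matching the cilium-pxpmm (burstable) and kube-proxy-n99dg (besteffort) pods admitted just before. An illustrative sketch of the convention, not the kubelet's actual code:

    # pod_slice_name.py -- illustrative reconstruction of the leaf slice names seen
    # above (burstable/besteffort only; Guaranteed pods omit the QoS segment).
    def pod_slice(qos_class: str, pod_uid: str) -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "6258f96c-67ee-4076-9b7a-b023abccd2f8"))
    # kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice
    print(pod_slice("besteffort", "fe5d3995-e363-455c-8f28-03f29dbc9698"))
    # kubepods-besteffort-podfe5d3995_e363_455c_8f28_03f29dbc9698.slice

These leaf slices sit under kubepods.slice/kubepods-<qos>.slice, the same hierarchy the cpuset.cpus.effective read at 20:33:10.596 above refers to.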
Oct 2 20:33:12.540363 kubelet[1380]: I1002 20:33:12.540325 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cni-path\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.540744 kubelet[1380]: I1002 20:33:12.540710 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-lib-modules\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.541389 kubelet[1380]: I1002 20:33:12.541359 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-xtables-lock\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.542062 kubelet[1380]: I1002 20:33:12.541990 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-kernel\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.542669 kubelet[1380]: I1002 20:33:12.542641 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-hubble-tls\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.543320 kubelet[1380]: I1002 20:33:12.543058 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l6qf\" (UniqueName: \"kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-kube-api-access-8l6qf\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.543637 kubelet[1380]: I1002 20:33:12.543610 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe5d3995-e363-455c-8f28-03f29dbc9698-xtables-lock\") pod \"kube-proxy-n99dg\" (UID: \"fe5d3995-e363-455c-8f28-03f29dbc9698\") " pod="kube-system/kube-proxy-n99dg" Oct 2 20:33:12.543945 kubelet[1380]: I1002 20:33:12.543913 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-run\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.544355 kubelet[1380]: I1002 20:33:12.544327 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-net\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.544666 kubelet[1380]: I1002 20:33:12.544640 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt242\" 
(UniqueName: \"kubernetes.io/projected/fe5d3995-e363-455c-8f28-03f29dbc9698-kube-api-access-bt242\") pod \"kube-proxy-n99dg\" (UID: \"fe5d3995-e363-455c-8f28-03f29dbc9698\") " pod="kube-system/kube-proxy-n99dg" Oct 2 20:33:12.544994 kubelet[1380]: I1002 20:33:12.544967 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe5d3995-e363-455c-8f28-03f29dbc9698-lib-modules\") pod \"kube-proxy-n99dg\" (UID: \"fe5d3995-e363-455c-8f28-03f29dbc9698\") " pod="kube-system/kube-proxy-n99dg" Oct 2 20:33:12.545424 kubelet[1380]: I1002 20:33:12.545335 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-cgroup\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.545938 kubelet[1380]: I1002 20:33:12.545810 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6258f96c-67ee-4076-9b7a-b023abccd2f8-clustermesh-secrets\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.546417 kubelet[1380]: I1002 20:33:12.546388 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fe5d3995-e363-455c-8f28-03f29dbc9698-kube-proxy\") pod \"kube-proxy-n99dg\" (UID: \"fe5d3995-e363-455c-8f28-03f29dbc9698\") " pod="kube-system/kube-proxy-n99dg" Oct 2 20:33:12.546793 kubelet[1380]: I1002 20:33:12.546690 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-hostproc\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.547185 kubelet[1380]: I1002 20:33:12.547045 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-etc-cni-netd\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.547576 kubelet[1380]: I1002 20:33:12.547470 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-config-path\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.547909 kubelet[1380]: I1002 20:33:12.547882 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-bpf-maps\") pod \"cilium-pxpmm\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " pod="kube-system/cilium-pxpmm" Oct 2 20:33:12.564714 sshd[1186]: pam_unix(sshd:session): session closed for user core Oct 2 20:33:12.567000 audit[1186]: USER_END pid=1186 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:12.567000 audit[1186]: CRED_DISP pid=1186 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:33:12.571691 systemd[1]: sshd@6-172.24.4.121:22-172.24.4.1:49490.service: Deactivated successfully. Oct 2 20:33:12.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.121:22-172.24.4.1:49490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:33:12.573431 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 20:33:12.574914 systemd-logind[1049]: Session 7 logged out. Waiting for processes to exit. Oct 2 20:33:12.577389 systemd-logind[1049]: Removed session 7. Oct 2 20:33:12.842741 env[1057]: time="2023-10-02T20:33:12.834121337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxpmm,Uid:6258f96c-67ee-4076-9b7a-b023abccd2f8,Namespace:kube-system,Attempt:0,}" Oct 2 20:33:12.843780 env[1057]: time="2023-10-02T20:33:12.843665477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n99dg,Uid:fe5d3995-e363-455c-8f28-03f29dbc9698,Namespace:kube-system,Attempt:0,}" Oct 2 20:33:13.491092 kubelet[1380]: E1002 20:33:13.491000 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:13.658176 env[1057]: time="2023-10-02T20:33:13.658058651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.661255 env[1057]: time="2023-10-02T20:33:13.661194482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.665840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013599443.mount: Deactivated successfully. 
Oct 2 20:33:13.669401 env[1057]: time="2023-10-02T20:33:13.669303190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.681422 env[1057]: time="2023-10-02T20:33:13.681261097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.688950 env[1057]: time="2023-10-02T20:33:13.688877241Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.692813 env[1057]: time="2023-10-02T20:33:13.692732412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.694713 env[1057]: time="2023-10-02T20:33:13.694631093Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.700263 env[1057]: time="2023-10-02T20:33:13.700099418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:13.738449 env[1057]: time="2023-10-02T20:33:13.726180494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:33:13.738449 env[1057]: time="2023-10-02T20:33:13.726235627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:33:13.738449 env[1057]: time="2023-10-02T20:33:13.726254051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:33:13.738449 env[1057]: time="2023-10-02T20:33:13.726475878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc pid=1434 runtime=io.containerd.runc.v2 Oct 2 20:33:13.742744 env[1057]: time="2023-10-02T20:33:13.741639477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:33:13.742744 env[1057]: time="2023-10-02T20:33:13.741711342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:33:13.742744 env[1057]: time="2023-10-02T20:33:13.741739845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:33:13.742744 env[1057]: time="2023-10-02T20:33:13.741963615Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8213b6215d4f0663361d161c57e5bfa9b568721ef78d2ddf222330ec48962ef4 pid=1450 runtime=io.containerd.runc.v2 Oct 2 20:33:13.764348 systemd[1]: Started cri-containerd-8213b6215d4f0663361d161c57e5bfa9b568721ef78d2ddf222330ec48962ef4.scope. 
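The containerd records above tie three namings together: RunPodSandbox returns a 64-hex-digit sandbox ID, the runc v2 shim keeps its task state under /run/containerd/io.containerd.runtime.v2.task/k8s.io/<id>, and systemd starts a matching cri-containerd-<id>.scope unit (sandbox 8213b621... for kube-proxy-n99dg here, 88651dce... for cilium-pxpmm a moment later). Illustrative helpers reconstructing the pattern, not containerd API calls:

    # sandbox_paths.py -- illustrative reconstruction of the naming pattern visible
    # in the containerd/systemd records above.
    def shim_task_dir(sandbox_id: str) -> str:
        return f"/run/containerd/io.containerd.runtime.v2.task/k8s.io/{sandbox_id}"

    def scope_unit(sandbox_id: str) -> str:
        return f"cri-containerd-{sandbox_id}.scope"

    sandbox = "8213b6215d4f0663361d161c57e5bfa9b568721ef78d2ddf222330ec48962ef4"
    print(shim_task_dir(sandbox))
    print(scope_unit(sandbox))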
Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.781000 audit: BPF prog-id=67 op=LOAD Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1450 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832313362363231356434663036363333363164313631633537653562 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1450 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.782000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832313362363231356434663036363333363164313631633537653562 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit: BPF prog-id=68 op=LOAD Oct 2 20:33:13.782000 audit[1463]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000386a90 items=0 ppid=1450 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832313362363231356434663036363333363164313631633537653562 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: 
denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit: BPF prog-id=69 op=LOAD Oct 2 20:33:13.782000 audit[1463]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000386ad8 items=0 ppid=1450 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832313362363231356434663036363333363164313631633537653562 Oct 2 20:33:13.782000 audit: BPF prog-id=69 op=UNLOAD Oct 2 20:33:13.782000 audit: BPF prog-id=68 op=UNLOAD Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: 
denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { perfmon } for pid=1463 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit[1463]: AVC avc: denied { bpf } for pid=1463 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.782000 audit: BPF prog-id=70 op=LOAD Oct 2 20:33:13.782000 audit[1463]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000386ee8 items=0 ppid=1450 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832313362363231356434663036363333363164313631633537653562 Oct 2 20:33:13.797879 systemd[1]: Started cri-containerd-88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc.scope. Oct 2 20:33:13.810378 env[1057]: time="2023-10-02T20:33:13.810308516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n99dg,Uid:fe5d3995-e363-455c-8f28-03f29dbc9698,Namespace:kube-system,Attempt:0,} returns sandbox id \"8213b6215d4f0663361d161c57e5bfa9b568721ef78d2ddf222330ec48962ef4\"" Oct 2 20:33:13.814261 env[1057]: time="2023-10-02T20:33:13.814227405Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.814000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.815000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.815000 audit: BPF prog-id=71 op=LOAD Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1434 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363531646365313935653463356164323965356633336364646363 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1434 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363531646365313935653463356164323965356633336364646363 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.816000 audit: BPF prog-id=72 op=LOAD Oct 2 20:33:13.816000 audit[1474]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000298c60 items=0 ppid=1434 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363531646365313935653463356164323965356633336364646363 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.817000 audit: BPF 
prog-id=73 op=LOAD Oct 2 20:33:13.817000 audit[1474]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000298ca8 items=0 ppid=1434 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363531646365313935653463356164323965356633336364646363 Oct 2 20:33:13.818000 audit: BPF prog-id=73 op=UNLOAD Oct 2 20:33:13.818000 audit: BPF prog-id=72 op=UNLOAD Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { perfmon } for pid=1474 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit[1474]: AVC avc: denied { bpf } for pid=1474 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:13.818000 audit: BPF prog-id=74 op=LOAD Oct 2 20:33:13.818000 audit[1474]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0002990b8 items=0 ppid=1434 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:13.818000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838363531646365313935653463356164323965356633336364646363 Oct 2 20:33:13.836798 env[1057]: time="2023-10-02T20:33:13.836719229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pxpmm,Uid:6258f96c-67ee-4076-9b7a-b023abccd2f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\"" Oct 2 20:33:14.491636 kubelet[1380]: E1002 20:33:14.491555 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:15.492286 kubelet[1380]: E1002 20:33:15.492231 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:15.559408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785369766.mount: Deactivated successfully. Oct 2 20:33:16.416813 env[1057]: time="2023-10-02T20:33:16.416633878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:16.419328 env[1057]: time="2023-10-02T20:33:16.419270974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:16.423498 env[1057]: time="2023-10-02T20:33:16.423444321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:16.427450 env[1057]: time="2023-10-02T20:33:16.427388659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0\"" Oct 2 20:33:16.427742 env[1057]: time="2023-10-02T20:33:16.427604443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:16.429491 env[1057]: time="2023-10-02T20:33:16.428876609Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 20:33:16.432488 env[1057]: time="2023-10-02T20:33:16.432430134Z" level=info msg="CreateContainer within sandbox \"8213b6215d4f0663361d161c57e5bfa9b568721ef78d2ddf222330ec48962ef4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 20:33:16.456423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629143985.mount: Deactivated successfully. Oct 2 20:33:16.461774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount146164800.mount: Deactivated successfully. 
Oct 2 20:33:16.473631 env[1057]: time="2023-10-02T20:33:16.473553621Z" level=info msg="CreateContainer within sandbox \"8213b6215d4f0663361d161c57e5bfa9b568721ef78d2ddf222330ec48962ef4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79d38996937b84365325a105274135c07b7f1f098ef621b986422ff395eb6e54\"" Oct 2 20:33:16.475165 env[1057]: time="2023-10-02T20:33:16.475088229Z" level=info msg="StartContainer for \"79d38996937b84365325a105274135c07b7f1f098ef621b986422ff395eb6e54\"" Oct 2 20:33:16.493071 kubelet[1380]: E1002 20:33:16.492966 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:16.513070 systemd[1]: Started cri-containerd-79d38996937b84365325a105274135c07b7f1f098ef621b986422ff395eb6e54.scope. Oct 2 20:33:16.541189 kernel: kauditd_printk_skb: 157 callbacks suppressed Oct 2 20:33:16.541363 kernel: audit: type=1400 audit(1696278796.535:598): avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1450 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643338393936393337623834333635333235613130353237343133 Oct 2 20:33:16.551884 kernel: audit: type=1300 audit(1696278796.535:598): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=8 items=0 ppid=1450 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.551937 kernel: audit: type=1327 audit(1696278796.535:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643338393936393337623834333635333235613130353237343133 Oct 2 20:33:16.555550 kernel: audit: type=1400 audit(1696278796.535:599): avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.555613 kernel: audit: type=1400 audit(1696278796.535:599): avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.566312 kernel: audit: type=1400 audit(1696278796.535:599): avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.566406 kernel: audit: type=1400 audit(1696278796.535:599): avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.574232 kernel: audit: type=1400 audit(1696278796.535:599): avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.574338 kernel: audit: type=1400 audit(1696278796.535:599): avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.574358 kernel: audit: type=1400 audit(1696278796.535:599): avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.535000 audit: BPF prog-id=75 op=LOAD Oct 2 20:33:16.535000 audit[1522]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c00027c060 items=0 ppid=1450 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.535000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643338393936393337623834333635333235613130353237343133 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit: BPF prog-id=76 op=LOAD Oct 2 20:33:16.540000 audit[1522]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c00027c0a8 items=0 ppid=1450 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643338393936393337623834333635333235613130353237343133 Oct 2 20:33:16.540000 audit: BPF prog-id=76 op=UNLOAD Oct 2 20:33:16.540000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { perfmon } for pid=1522 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit[1522]: AVC avc: denied { bpf } for pid=1522 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:33:16.540000 audit: BPF prog-id=77 op=LOAD Oct 2 20:33:16.540000 audit[1522]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c00027c138 items=0 ppid=1450 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739643338393936393337623834333635333235613130353237343133 Oct 2 20:33:16.584556 env[1057]: time="2023-10-02T20:33:16.584523455Z" level=info msg="StartContainer for \"79d38996937b84365325a105274135c07b7f1f098ef621b986422ff395eb6e54\" returns successfully" Oct 2 20:33:16.646000 audit[1573]: NETFILTER_CFG table=mangle:14 family=10 entries=1 op=nft_register_chain pid=1573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.646000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2cf49dd0 a2=0 a3=7ffc2cf49dbc items=0 ppid=1532 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:33:16.647000 audit[1574]: NETFILTER_CFG table=nat:15 family=10 entries=1 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.647000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb74a6840 a2=0 a3=7fffb74a682c items=0 ppid=1532 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:33:16.648000 audit[1575]: NETFILTER_CFG table=mangle:16 family=2 entries=1 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.648000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfb7d8380 a2=0 a3=7ffdfb7d836c items=0 ppid=1532 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.648000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:33:16.649000 audit[1576]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.649000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe2eea7b30 a2=0 a3=7ffe2eea7b1c items=0 ppid=1532 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:33:16.650000 audit[1577]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.650000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd54b41d10 a2=0 a3=7ffd54b41cfc items=0 ppid=1532 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.650000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:33:16.650000 audit[1578]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.650000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0975dd30 a2=0 a3=7fff0975dd1c items=0 ppid=1532 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.650000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:33:16.722235 kubelet[1380]: I1002 20:33:16.721992 1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n99dg" podStartSLOduration=2.106236248 podCreationTimestamp="2023-10-02 20:33:12 +0000 UTC" firstStartedPulling="2023-10-02 20:33:13.81279008 +0000 UTC m=+4.000578435" lastFinishedPulling="2023-10-02 20:33:16.428364339 +0000 UTC m=+6.616152764" observedRunningTime="2023-10-02 20:33:16.719856252 +0000 UTC m=+6.907644637" watchObservedRunningTime="2023-10-02 20:33:16.721810577 +0000 UTC m=+6.909598982" Oct 2 20:33:16.749000 audit[1579]: 
NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.749000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe72ab7a10 a2=0 a3=7ffe72ab79fc items=0 ppid=1532 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:33:16.755000 audit[1581]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.755000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff443fcc50 a2=0 a3=7fff443fcc3c items=0 ppid=1532 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 20:33:16.763000 audit[1584]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.763000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe87186fa0 a2=0 a3=7ffe87186f8c items=0 ppid=1532 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.763000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 20:33:16.766000 audit[1585]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.766000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff90c4d590 a2=0 a3=7fff90c4d57c items=0 ppid=1532 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:33:16.771000 audit[1587]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.771000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc30584e30 a2=0 a3=7ffc30584e1c items=0 ppid=1532 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.771000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:33:16.774000 audit[1588]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.774000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff728989c0 a2=0 a3=7fff728989ac items=0 ppid=1532 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:33:16.782000 audit[1590]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.782000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffa32c6500 a2=0 a3=7fffa32c64ec items=0 ppid=1532 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:33:16.791000 audit[1593]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.791000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd115a51a0 a2=0 a3=7ffd115a518c items=0 ppid=1532 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 20:33:16.793000 audit[1594]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.793000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd19c54330 a2=0 a3=7ffd19c5431c items=0 ppid=1532 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:33:16.799000 audit[1596]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.799000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffced354c80 a2=0 a3=7ffced354c6c items=0 ppid=1532 pid=1596 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:33:16.801000 audit[1597]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.801000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9d409230 a2=0 a3=7ffd9d40921c items=0 ppid=1532 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:33:16.807000 audit[1599]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.807000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5fbd9c80 a2=0 a3=7fff5fbd9c6c items=0 ppid=1532 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:33:16.816000 audit[1602]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.816000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc2e978120 a2=0 a3=7ffc2e97810c items=0 ppid=1532 pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:33:16.825000 audit[1605]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.825000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee86f3260 a2=0 a3=7ffee86f324c items=0 ppid=1532 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.825000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:33:16.828000 audit[1606]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.828000 audit[1606]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc908e1df0 a2=0 a3=7ffc908e1ddc items=0 ppid=1532 pid=1606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:33:16.837000 audit[1608]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=1608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.837000 audit[1608]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdc6b4e0d0 a2=0 a3=7ffdc6b4e0bc items=0 ppid=1532 pid=1608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.837000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:33:16.893000 audit[1613]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.893000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff6eae6c90 a2=0 a3=7fff6eae6c7c items=0 ppid=1532 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:33:16.895000 audit[1614]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.895000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfd02b310 a2=0 a3=7ffcfd02b2fc items=0 ppid=1532 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:33:16.900000 audit[1616]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:33:16.900000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffeda22e90 a2=0 a3=7fffeda22e7c items=0 ppid=1532 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.900000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:33:16.931000 audit[1622]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=1622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:33:16.931000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffe062c7450 a2=0 a3=7ffe062c743c items=0 ppid=1532 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:33:16.949000 audit[1622]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=1622 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:33:16.949000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe062c7450 a2=0 a3=7ffe062c743c items=0 ppid=1532 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:33:16.958000 audit[1629]: NETFILTER_CFG table=filter:41 family=2 entries=14 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:33:16.958000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffe88999840 a2=0 a3=7ffe8899982c items=0 ppid=1532 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:33:16.959000 audit[1629]: NETFILTER_CFG table=nat:42 family=2 entries=12 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:33:16.959000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=2572 a0=3 a1=7ffe88999840 a2=0 a3=0 items=0 ppid=1532 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.959000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:33:16.963000 audit[1630]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=1630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.963000 audit[1630]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc7ae6d170 a2=0 a3=7ffc7ae6d15c items=0 ppid=1532 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.963000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:33:16.968000 audit[1632]: NETFILTER_CFG table=filter:44 family=10 entries=2 op=nft_register_chain pid=1632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.968000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe33f1f410 a2=0 a3=7ffe33f1f3fc items=0 ppid=1532 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 20:33:16.975000 audit[1635]: NETFILTER_CFG table=filter:45 family=10 entries=2 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.975000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc6984e470 a2=0 a3=7ffc6984e45c items=0 ppid=1532 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 20:33:16.978000 audit[1636]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=1636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.978000 audit[1636]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc8664290 a2=0 a3=7ffdc866427c items=0 ppid=1532 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.978000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:33:16.981000 audit[1638]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=1638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.981000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc24886410 a2=0 a3=7ffc248863fc items=0 ppid=1532 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:33:16.982000 audit[1639]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_chain pid=1639 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.982000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4b2e6160 a2=0 a3=7ffd4b2e614c items=0 ppid=1532 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:33:16.985000 audit[1641]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_rule pid=1641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.985000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdba89f120 a2=0 a3=7ffdba89f10c items=0 ppid=1532 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 20:33:16.989000 audit[1644]: NETFILTER_CFG table=filter:50 family=10 entries=2 op=nft_register_chain pid=1644 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.989000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe401f29b0 a2=0 a3=7ffe401f299c items=0 ppid=1532 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.989000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:33:16.990000 audit[1645]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=1645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.990000 audit[1645]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc854ccde0 a2=0 a3=7ffc854ccdcc items=0 ppid=1532 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:33:16.994000 audit[1647]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=1647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.994000 audit[1647]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd09ee4ab0 a2=0 a3=7ffd09ee4a9c items=0 ppid=1532 pid=1647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.994000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:33:16.996000 audit[1648]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=1648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:16.996000 audit[1648]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb9abc9d0 a2=0 a3=7fffb9abc9bc items=0 ppid=1532 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:16.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:33:17.002000 audit[1650]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=1650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.002000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffccdc59a60 a2=0 a3=7ffccdc59a4c items=0 ppid=1532 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:33:17.011000 audit[1653]: NETFILTER_CFG table=filter:55 family=10 entries=1 op=nft_register_rule pid=1653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.011000 audit[1653]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc85714900 a2=0 a3=7ffc857148ec items=0 ppid=1532 pid=1653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.011000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:33:17.019000 audit[1656]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_rule pid=1656 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.019000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff72bf390 a2=0 a3=7ffff72bf37c items=0 ppid=1532 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 20:33:17.021000 audit[1657]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1657 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 20:33:17.021000 audit[1657]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd69a00070 a2=0 a3=7ffd69a0005c items=0 ppid=1532 pid=1657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:33:17.026000 audit[1659]: NETFILTER_CFG table=nat:58 family=10 entries=2 op=nft_register_chain pid=1659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.026000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff6f58bff0 a2=0 a3=7fff6f58bfdc items=0 ppid=1532 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.026000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:33:17.033000 audit[1662]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.033000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff55cf3290 a2=0 a3=7fff55cf327c items=0 ppid=1532 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.033000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:33:17.034000 audit[1663]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=1663 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.034000 audit[1663]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe64bc5980 a2=0 a3=7ffe64bc596c items=0 ppid=1532 pid=1663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:33:17.039000 audit[1665]: NETFILTER_CFG table=nat:61 family=10 entries=2 op=nft_register_chain pid=1665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.039000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffee5171380 a2=0 a3=7ffee517136c items=0 ppid=1532 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.039000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 
Oct 2 20:33:17.040000 audit[1666]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=1666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.040000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd529e88d0 a2=0 a3=7ffd529e88bc items=0 ppid=1532 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:33:17.042000 audit[1668]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_rule pid=1668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.042000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd34f08910 a2=0 a3=7ffd34f088fc items=0 ppid=1532 pid=1668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.042000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:33:17.045000 audit[1671]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:33:17.045000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff338d1110 a2=0 a3=7fff338d10fc items=0 ppid=1532 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.045000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:33:17.048000 audit[1673]: NETFILTER_CFG table=filter:65 family=10 entries=3 op=nft_register_rule pid=1673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:33:17.048000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd8a00d0c0 a2=0 a3=7ffd8a00d0ac items=0 ppid=1532 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.048000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:33:17.048000 audit[1673]: NETFILTER_CFG table=nat:66 family=10 entries=7 op=nft_register_chain pid=1673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:33:17.048000 audit[1673]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffd8a00d0c0 a2=0 a3=7ffd8a00d0ac items=0 ppid=1532 pid=1673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:33:17.048000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:33:17.494507 kubelet[1380]: E1002 20:33:17.494432 1380 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:18.494933 kubelet[1380]: E1002 20:33:18.494808 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:19.496309 kubelet[1380]: E1002 20:33:19.495793 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:20.496850 kubelet[1380]: E1002 20:33:20.496817 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:21.498116 kubelet[1380]: E1002 20:33:21.498044 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:22.498685 kubelet[1380]: E1002 20:33:22.498627 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:23.439087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25020357.mount: Deactivated successfully. Oct 2 20:33:23.499824 kubelet[1380]: E1002 20:33:23.499683 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:24.500892 kubelet[1380]: E1002 20:33:24.500693 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:25.501829 kubelet[1380]: E1002 20:33:25.501751 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:26.502469 kubelet[1380]: E1002 20:33:26.502392 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:27.503802 kubelet[1380]: E1002 20:33:27.503684 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:27.630684 env[1057]: time="2023-10-02T20:33:27.630616830Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:27.633748 env[1057]: time="2023-10-02T20:33:27.633725214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:27.637104 env[1057]: time="2023-10-02T20:33:27.637083268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:33:27.638775 env[1057]: time="2023-10-02T20:33:27.638685430Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 20:33:27.641923 env[1057]: time="2023-10-02T20:33:27.641895976Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:33:27.654204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286194698.mount: Deactivated successfully. 
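The once-per-second file_linux.go:61 errors are the kubelet's static-pod file source polling /etc/kubernetes/manifests (the conventional staticPodPath / --pod-manifest-path location); the directory simply does not exist on this node, so the messages are harmless noise rather than a fault. A minimal sketch, assuming one only wants an empty manifests directory so the kubelet stops logging it:

    # Create the (empty) static-pod manifest directory the kubelet is polling.
    # Purely illustrative; the path is the one reported in the log above.
    from pathlib import Path

    Path("/etc/kubernetes/manifests").mkdir(parents=True, exist_ok=True)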
Oct 2 20:33:27.659605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358002659.mount: Deactivated successfully. Oct 2 20:33:27.668718 env[1057]: time="2023-10-02T20:33:27.668653760Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\"" Oct 2 20:33:27.669582 env[1057]: time="2023-10-02T20:33:27.669549879Z" level=info msg="StartContainer for \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\"" Oct 2 20:33:27.695155 systemd[1]: Started cri-containerd-0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd.scope. Oct 2 20:33:27.706980 systemd[1]: cri-containerd-0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd.scope: Deactivated successfully. Oct 2 20:33:28.296620 env[1057]: time="2023-10-02T20:33:28.296493449Z" level=info msg="shim disconnected" id=0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd Oct 2 20:33:28.296620 env[1057]: time="2023-10-02T20:33:28.296603687Z" level=warning msg="cleaning up after shim disconnected" id=0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd namespace=k8s.io Oct 2 20:33:28.296620 env[1057]: time="2023-10-02T20:33:28.296627953Z" level=info msg="cleaning up dead shim" Oct 2 20:33:28.320560 env[1057]: time="2023-10-02T20:33:28.320374690Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:33:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1699 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:33:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:33:28.321422 env[1057]: time="2023-10-02T20:33:28.321055434Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:33:28.322225 env[1057]: time="2023-10-02T20:33:28.322049198Z" level=error msg="Failed to pipe stderr of container \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\"" error="reading from a closed fifo" Oct 2 20:33:28.322471 env[1057]: time="2023-10-02T20:33:28.322077210Z" level=error msg="Failed to pipe stdout of container \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\"" error="reading from a closed fifo" Oct 2 20:33:28.328721 env[1057]: time="2023-10-02T20:33:28.328528916Z" level=error msg="StartContainer for \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:33:28.329672 kubelet[1380]: E1002 20:33:28.329542 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd" Oct 2 20:33:28.330016 kubelet[1380]: E1002 20:33:28.329955 1380 kuberuntime_manager.go:1209] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:33:28.330016 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:33:28.330016 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:33:28.330322 kubelet[1380]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8l6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:33:28.330322 kubelet[1380]: E1002 20:33:28.330082 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:33:28.504590 kubelet[1380]: E1002 20:33:28.504503 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:28.653113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd-rootfs.mount: Deactivated successfully. Oct 2 20:33:28.749022 env[1057]: time="2023-10-02T20:33:28.748893527Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:33:28.785332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4266140839.mount: Deactivated successfully. Oct 2 20:33:28.799921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233268816.mount: Deactivated successfully. 
Oct 2 20:33:28.810711 env[1057]: time="2023-10-02T20:33:28.810607405Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\"" Oct 2 20:33:28.812568 env[1057]: time="2023-10-02T20:33:28.812500054Z" level=info msg="StartContainer for \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\"" Oct 2 20:33:28.849918 systemd[1]: Started cri-containerd-d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75.scope. Oct 2 20:33:28.872638 systemd[1]: cri-containerd-d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75.scope: Deactivated successfully. Oct 2 20:33:28.886790 env[1057]: time="2023-10-02T20:33:28.886714323Z" level=info msg="shim disconnected" id=d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75 Oct 2 20:33:28.886790 env[1057]: time="2023-10-02T20:33:28.886778304Z" level=warning msg="cleaning up after shim disconnected" id=d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75 namespace=k8s.io Oct 2 20:33:28.886790 env[1057]: time="2023-10-02T20:33:28.886793833Z" level=info msg="cleaning up dead shim" Oct 2 20:33:28.894650 env[1057]: time="2023-10-02T20:33:28.894599241Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:33:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1734 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:33:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:33:28.895095 env[1057]: time="2023-10-02T20:33:28.895038650Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:33:28.896250 env[1057]: time="2023-10-02T20:33:28.896191864Z" level=error msg="Failed to pipe stdout of container \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\"" error="reading from a closed fifo" Oct 2 20:33:28.898214 env[1057]: time="2023-10-02T20:33:28.898167520Z" level=error msg="Failed to pipe stderr of container \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\"" error="reading from a closed fifo" Oct 2 20:33:28.902438 env[1057]: time="2023-10-02T20:33:28.902380764Z" level=error msg="StartContainer for \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:33:28.902685 kubelet[1380]: E1002 20:33:28.902661 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75" Oct 2 20:33:28.902817 kubelet[1380]: E1002 20:33:28.902797 1380 kuberuntime_manager.go:1209] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:33:28.902817 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:33:28.902817 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:33:28.902817 kubelet[1380]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8l6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:33:28.903055 kubelet[1380]: E1002 20:33:28.902853 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:33:29.505862 kubelet[1380]: E1002 20:33:29.505769 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:29.749889 kubelet[1380]: I1002 20:33:29.749806 1380 scope.go:117] "RemoveContainer" containerID="0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd" Oct 2 20:33:29.750607 kubelet[1380]: I1002 20:33:29.750552 1380 scope.go:117] "RemoveContainer" containerID="0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd" Oct 2 20:33:29.753601 env[1057]: time="2023-10-02T20:33:29.753523082Z" level=info msg="RemoveContainer for \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\"" Oct 2 20:33:29.755692 env[1057]: time="2023-10-02T20:33:29.755626257Z" level=info msg="RemoveContainer for \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\"" Oct 2 
20:33:29.756505 env[1057]: time="2023-10-02T20:33:29.756120779Z" level=error msg="RemoveContainer for \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\" failed" error="failed to set removing state for container \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\": container is already in removing state" Oct 2 20:33:29.757729 kubelet[1380]: E1002 20:33:29.757685 1380 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\": container is already in removing state" containerID="0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd" Oct 2 20:33:29.757883 kubelet[1380]: E1002 20:33:29.757785 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd": container is already in removing state; Skipping pod "cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)" Oct 2 20:33:29.758736 kubelet[1380]: E1002 20:33:29.758678 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:33:29.761955 env[1057]: time="2023-10-02T20:33:29.761877601Z" level=info msg="RemoveContainer for \"0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd\" returns successfully" Oct 2 20:33:30.486881 kubelet[1380]: E1002 20:33:30.486800 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:30.506156 kubelet[1380]: E1002 20:33:30.506073 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:30.763441 kubelet[1380]: E1002 20:33:30.763255 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:33:31.403665 kubelet[1380]: W1002 20:33:31.403548 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice/cri-containerd-0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd.scope WatchSource:0}: container "0bcacbc6a7cb122f127915c0fa8461485d7b66e6291d395db34531e319b2a0cd" in namespace "k8s.io": not found Oct 2 20:33:31.507332 kubelet[1380]: E1002 20:33:31.507232 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:32.507499 kubelet[1380]: E1002 20:33:32.507423 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:33.508615 kubelet[1380]: E1002 20:33:33.508423 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:34.508723 kubelet[1380]: E1002 20:33:34.508670 1380 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:34.519473 kubelet[1380]: W1002 20:33:34.519410 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice/cri-containerd-d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75.scope WatchSource:0}: task d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75 not found: not found Oct 2 20:33:35.510100 kubelet[1380]: E1002 20:33:35.509959 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:36.510666 kubelet[1380]: E1002 20:33:36.510530 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:36.652706 update_engine[1051]: I1002 20:33:36.651386 1051 update_attempter.cc:505] Updating boot flags... Oct 2 20:33:37.511933 kubelet[1380]: E1002 20:33:37.511788 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:38.512907 kubelet[1380]: E1002 20:33:38.512765 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:39.512970 kubelet[1380]: E1002 20:33:39.512911 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:40.514781 kubelet[1380]: E1002 20:33:40.514731 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:41.516209 kubelet[1380]: E1002 20:33:41.516108 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:42.517860 kubelet[1380]: E1002 20:33:42.517787 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:42.682855 env[1057]: time="2023-10-02T20:33:42.682700424Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:33:42.706865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951690101.mount: Deactivated successfully. Oct 2 20:33:42.723047 env[1057]: time="2023-10-02T20:33:42.722777453Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\"" Oct 2 20:33:42.724801 env[1057]: time="2023-10-02T20:33:42.724684558Z" level=info msg="StartContainer for \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\"" Oct 2 20:33:42.774416 systemd[1]: Started cri-containerd-ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14.scope. Oct 2 20:33:42.797681 systemd[1]: cri-containerd-ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14.scope: Deactivated successfully. 
Oct 2 20:33:42.812267 env[1057]: time="2023-10-02T20:33:42.812183060Z" level=info msg="shim disconnected" id=ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14 Oct 2 20:33:42.812267 env[1057]: time="2023-10-02T20:33:42.812263111Z" level=warning msg="cleaning up after shim disconnected" id=ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14 namespace=k8s.io Oct 2 20:33:42.812267 env[1057]: time="2023-10-02T20:33:42.812274673Z" level=info msg="cleaning up dead shim" Oct 2 20:33:42.820566 env[1057]: time="2023-10-02T20:33:42.820507187Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:33:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1784 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:33:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:33:42.821022 env[1057]: time="2023-10-02T20:33:42.820960518Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:33:42.823330 env[1057]: time="2023-10-02T20:33:42.823259500Z" level=error msg="Failed to pipe stdout of container \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\"" error="reading from a closed fifo" Oct 2 20:33:42.823390 env[1057]: time="2023-10-02T20:33:42.823345451Z" level=error msg="Failed to pipe stderr of container \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\"" error="reading from a closed fifo" Oct 2 20:33:42.827907 env[1057]: time="2023-10-02T20:33:42.827840210Z" level=error msg="StartContainer for \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:33:42.828374 kubelet[1380]: E1002 20:33:42.828349 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14" Oct 2 20:33:42.828508 kubelet[1380]: E1002 20:33:42.828479 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:33:42.828508 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:33:42.828508 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:33:42.828508 kubelet[1380]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8l6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:33:42.828671 kubelet[1380]: E1002 20:33:42.828532 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:33:43.518971 kubelet[1380]: E1002 20:33:43.518889 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:43.701759 systemd[1]: run-containerd-runc-k8s.io-ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14-runc.8LRpoa.mount: Deactivated successfully. Oct 2 20:33:43.702007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14-rootfs.mount: Deactivated successfully. 
Oct 2 20:33:43.809645 kubelet[1380]: I1002 20:33:43.808913 1380 scope.go:117] "RemoveContainer" containerID="d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75" Oct 2 20:33:43.810176 kubelet[1380]: I1002 20:33:43.810113 1380 scope.go:117] "RemoveContainer" containerID="d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75" Oct 2 20:33:43.813699 env[1057]: time="2023-10-02T20:33:43.813605371Z" level=info msg="RemoveContainer for \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\"" Oct 2 20:33:43.815259 env[1057]: time="2023-10-02T20:33:43.815161195Z" level=info msg="RemoveContainer for \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\"" Oct 2 20:33:43.815732 env[1057]: time="2023-10-02T20:33:43.815571657Z" level=error msg="RemoveContainer for \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\" failed" error="failed to set removing state for container \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\": container is already in removing state" Oct 2 20:33:43.816452 kubelet[1380]: E1002 20:33:43.816422 1380 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\": container is already in removing state" containerID="d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75" Oct 2 20:33:43.816677 kubelet[1380]: E1002 20:33:43.816652 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75": container is already in removing state; Skipping pod "cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)" Oct 2 20:33:43.817506 kubelet[1380]: E1002 20:33:43.817475 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:33:43.822742 env[1057]: time="2023-10-02T20:33:43.822665518Z" level=info msg="RemoveContainer for \"d104f9567e969f29a81edaa08f61bb262a1fb1d17f778a57e1ddc48763588e75\" returns successfully" Oct 2 20:33:44.520199 kubelet[1380]: E1002 20:33:44.519991 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:45.520818 kubelet[1380]: E1002 20:33:45.520771 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:45.921839 kubelet[1380]: W1002 20:33:45.921744 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice/cri-containerd-ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14.scope WatchSource:0}: task ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14 not found: not found Oct 2 20:33:46.522600 kubelet[1380]: E1002 20:33:46.522542 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:47.524284 kubelet[1380]: E1002 20:33:47.524215 1380 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:48.525847 kubelet[1380]: E1002 20:33:48.525748 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:49.527945 kubelet[1380]: E1002 20:33:49.527853 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:50.486944 kubelet[1380]: E1002 20:33:50.486853 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:50.529059 kubelet[1380]: E1002 20:33:50.529012 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:51.530777 kubelet[1380]: E1002 20:33:51.530618 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:52.531839 kubelet[1380]: E1002 20:33:52.531763 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:53.533374 kubelet[1380]: E1002 20:33:53.533304 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:54.534839 kubelet[1380]: E1002 20:33:54.534787 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:55.535757 kubelet[1380]: E1002 20:33:55.535700 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:56.537113 kubelet[1380]: E1002 20:33:56.536988 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:57.537971 kubelet[1380]: E1002 20:33:57.537775 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:58.538778 kubelet[1380]: E1002 20:33:58.538696 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:33:58.676601 kubelet[1380]: E1002 20:33:58.676545 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:33:59.540436 kubelet[1380]: E1002 20:33:59.540318 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:00.541429 kubelet[1380]: E1002 20:34:00.541263 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:01.542400 kubelet[1380]: E1002 20:34:01.542284 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:02.543102 kubelet[1380]: E1002 20:34:02.543029 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:03.543941 kubelet[1380]: E1002 20:34:03.543883 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:04.544241 kubelet[1380]: E1002 
20:34:04.544181 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:05.545361 kubelet[1380]: E1002 20:34:05.545242 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:06.545427 kubelet[1380]: E1002 20:34:06.545362 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:07.547198 kubelet[1380]: E1002 20:34:07.547054 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:08.548295 kubelet[1380]: E1002 20:34:08.548238 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:09.549820 kubelet[1380]: E1002 20:34:09.549725 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:09.684736 env[1057]: time="2023-10-02T20:34:09.684560340Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:34:09.707038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439079658.mount: Deactivated successfully. Oct 2 20:34:09.724566 env[1057]: time="2023-10-02T20:34:09.724481610Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\"" Oct 2 20:34:09.727958 env[1057]: time="2023-10-02T20:34:09.727838259Z" level=info msg="StartContainer for \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\"" Oct 2 20:34:09.779320 systemd[1]: Started cri-containerd-270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71.scope. Oct 2 20:34:09.797090 systemd[1]: cri-containerd-270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71.scope: Deactivated successfully. 
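Each failed start feeds the kubelet's crash-loop back-off: the log shows "back-off 10s", then "back-off 20s", and the delay keeps doubling per failure up to the kubelet's default cap of five minutes. A small sketch of that sequence (assuming the default 10s base and 300s cap):

    # Kubelet-style crash-loop back-off: delay doubles per failure up to a cap.
    from itertools import islice

    def backoff_delays(base=10, cap=300):
        d = base
        while True:
            yield min(d, cap)
            d *= 2

    print(list(islice(backoff_delays(), 6)))   # [10, 20, 40, 80, 160, 300]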
Oct 2 20:34:09.815391 env[1057]: time="2023-10-02T20:34:09.815277573Z" level=info msg="shim disconnected" id=270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71 Oct 2 20:34:09.815391 env[1057]: time="2023-10-02T20:34:09.815328609Z" level=warning msg="cleaning up after shim disconnected" id=270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71 namespace=k8s.io Oct 2 20:34:09.815391 env[1057]: time="2023-10-02T20:34:09.815339539Z" level=info msg="cleaning up dead shim" Oct 2 20:34:09.822918 env[1057]: time="2023-10-02T20:34:09.822865560Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:34:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1825 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:34:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:34:09.823354 env[1057]: time="2023-10-02T20:34:09.823301959Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:34:09.826255 env[1057]: time="2023-10-02T20:34:09.825174121Z" level=error msg="Failed to pipe stderr of container \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\"" error="reading from a closed fifo" Oct 2 20:34:09.826312 env[1057]: time="2023-10-02T20:34:09.826193253Z" level=error msg="Failed to pipe stdout of container \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\"" error="reading from a closed fifo" Oct 2 20:34:09.829279 env[1057]: time="2023-10-02T20:34:09.829240260Z" level=error msg="StartContainer for \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:34:09.830144 kubelet[1380]: E1002 20:34:09.829515 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71" Oct 2 20:34:09.830144 kubelet[1380]: E1002 20:34:09.829668 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:34:09.830144 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:34:09.830144 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:34:09.830144 kubelet[1380]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8l6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:34:09.830144 kubelet[1380]: E1002 20:34:09.830097 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:34:09.885568 kubelet[1380]: I1002 20:34:09.885320 1380 scope.go:117] "RemoveContainer" containerID="ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14" Oct 2 20:34:09.887059 kubelet[1380]: I1002 20:34:09.886914 1380 scope.go:117] "RemoveContainer" containerID="ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14" Oct 2 20:34:09.889146 env[1057]: time="2023-10-02T20:34:09.889045999Z" level=info msg="RemoveContainer for \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\"" Oct 2 20:34:09.890449 env[1057]: time="2023-10-02T20:34:09.890409787Z" level=info msg="RemoveContainer for \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\"" Oct 2 20:34:09.890721 env[1057]: time="2023-10-02T20:34:09.890668893Z" level=error msg="RemoveContainer for \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\" failed" error="failed to set removing state for container \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\": container is already in removing state" Oct 2 20:34:09.891303 kubelet[1380]: E1002 20:34:09.891228 1380 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\": 
container is already in removing state" containerID="ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14" Oct 2 20:34:09.891586 kubelet[1380]: E1002 20:34:09.891545 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14": container is already in removing state; Skipping pod "cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)" Oct 2 20:34:09.894385 env[1057]: time="2023-10-02T20:34:09.894320906Z" level=info msg="RemoveContainer for \"ba0a20c209f3cb359ec381786ea04f735120e9c8e1a9db959445b690228f9f14\" returns successfully" Oct 2 20:34:09.894767 kubelet[1380]: E1002 20:34:09.894721 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:34:10.487262 kubelet[1380]: E1002 20:34:10.487203 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:10.550886 kubelet[1380]: E1002 20:34:10.550796 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:10.700953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71-rootfs.mount: Deactivated successfully. Oct 2 20:34:11.551460 kubelet[1380]: E1002 20:34:11.551400 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:12.551910 kubelet[1380]: E1002 20:34:12.551843 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:12.924560 kubelet[1380]: W1002 20:34:12.924484 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice/cri-containerd-270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71.scope WatchSource:0}: task 270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71 not found: not found Oct 2 20:34:13.553898 kubelet[1380]: E1002 20:34:13.553841 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:14.556035 kubelet[1380]: E1002 20:34:14.555833 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:15.556848 kubelet[1380]: E1002 20:34:15.556795 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:16.558175 kubelet[1380]: E1002 20:34:16.558071 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:17.559119 kubelet[1380]: E1002 20:34:17.558909 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:18.561013 kubelet[1380]: E1002 20:34:18.560959 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:34:19.562901 kubelet[1380]: E1002 20:34:19.562832 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:20.563042 kubelet[1380]: E1002 20:34:20.562995 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:21.563833 kubelet[1380]: E1002 20:34:21.563708 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:21.676961 kubelet[1380]: E1002 20:34:21.676843 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:34:22.565074 kubelet[1380]: E1002 20:34:22.564962 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:23.565682 kubelet[1380]: E1002 20:34:23.565585 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:24.566888 kubelet[1380]: E1002 20:34:24.566782 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:25.568110 kubelet[1380]: E1002 20:34:25.568036 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:26.569280 kubelet[1380]: E1002 20:34:26.569217 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:27.570537 kubelet[1380]: E1002 20:34:27.570484 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:28.571759 kubelet[1380]: E1002 20:34:28.571631 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:29.572253 kubelet[1380]: E1002 20:34:29.572174 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:30.487270 kubelet[1380]: E1002 20:34:30.487185 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:30.573081 kubelet[1380]: E1002 20:34:30.572944 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:31.574250 kubelet[1380]: E1002 20:34:31.574090 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:32.575056 kubelet[1380]: E1002 20:34:32.574927 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:33.576119 kubelet[1380]: E1002 20:34:33.576014 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:33.677100 kubelet[1380]: E1002 20:34:33.677037 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup 
pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:34:34.576261 kubelet[1380]: E1002 20:34:34.576212 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:35.578236 kubelet[1380]: E1002 20:34:35.578170 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:36.579925 kubelet[1380]: E1002 20:34:36.579788 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:37.581416 kubelet[1380]: E1002 20:34:37.581332 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:38.582948 kubelet[1380]: E1002 20:34:38.582837 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:39.583961 kubelet[1380]: E1002 20:34:39.583883 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:40.584201 kubelet[1380]: E1002 20:34:40.584071 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:41.584582 kubelet[1380]: E1002 20:34:41.584470 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:42.585257 kubelet[1380]: E1002 20:34:42.585211 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:43.586193 kubelet[1380]: E1002 20:34:43.586107 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:44.586536 kubelet[1380]: E1002 20:34:44.586487 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:45.586901 kubelet[1380]: E1002 20:34:45.586838 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:46.587170 kubelet[1380]: E1002 20:34:46.587039 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:47.588937 kubelet[1380]: E1002 20:34:47.588820 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:47.677317 kubelet[1380]: E1002 20:34:47.677267 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:34:48.589349 kubelet[1380]: E1002 20:34:48.589231 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:49.589652 kubelet[1380]: E1002 20:34:49.589590 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:50.486619 kubelet[1380]: E1002 20:34:50.486550 1380 file.go:104] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:50.590973 kubelet[1380]: E1002 20:34:50.590919 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:51.592660 kubelet[1380]: E1002 20:34:51.592615 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:52.594616 kubelet[1380]: E1002 20:34:52.594568 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:53.595643 kubelet[1380]: E1002 20:34:53.595591 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:54.596879 kubelet[1380]: E1002 20:34:54.596715 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:55.597750 kubelet[1380]: E1002 20:34:55.597706 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:56.599283 kubelet[1380]: E1002 20:34:56.599194 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:57.600067 kubelet[1380]: E1002 20:34:57.600017 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:58.601429 kubelet[1380]: E1002 20:34:58.601117 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:34:59.602014 kubelet[1380]: E1002 20:34:59.601959 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:00.602840 kubelet[1380]: E1002 20:35:00.602773 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:01.603178 kubelet[1380]: E1002 20:35:01.602901 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:01.678486 env[1057]: time="2023-10-02T20:35:01.678360900Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 20:35:01.700035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243986105.mount: Deactivated successfully. Oct 2 20:35:01.708603 env[1057]: time="2023-10-02T20:35:01.708537475Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\"" Oct 2 20:35:01.709372 env[1057]: time="2023-10-02T20:35:01.709298502Z" level=info msg="StartContainer for \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\"" Oct 2 20:35:01.734031 systemd[1]: Started cri-containerd-560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d.scope. Oct 2 20:35:01.761921 systemd[1]: cri-containerd-560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d.scope: Deactivated successfully. 
Oct 2 20:35:01.788583 env[1057]: time="2023-10-02T20:35:01.788476423Z" level=info msg="shim disconnected" id=560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d Oct 2 20:35:01.788583 env[1057]: time="2023-10-02T20:35:01.788555632Z" level=warning msg="cleaning up after shim disconnected" id=560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d namespace=k8s.io Oct 2 20:35:01.788583 env[1057]: time="2023-10-02T20:35:01.788573325Z" level=info msg="cleaning up dead shim" Oct 2 20:35:01.805386 env[1057]: time="2023-10-02T20:35:01.805294002Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:35:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1869 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:35:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:35:01.806279 env[1057]: time="2023-10-02T20:35:01.806167180Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:35:01.808430 env[1057]: time="2023-10-02T20:35:01.808303354Z" level=error msg="Failed to pipe stdout of container \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\"" error="reading from a closed fifo" Oct 2 20:35:01.808668 env[1057]: time="2023-10-02T20:35:01.808588439Z" level=error msg="Failed to pipe stderr of container \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\"" error="reading from a closed fifo" Oct 2 20:35:01.812532 env[1057]: time="2023-10-02T20:35:01.812450219Z" level=error msg="StartContainer for \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:35:01.812810 kubelet[1380]: E1002 20:35:01.812732 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d" Oct 2 20:35:01.813100 kubelet[1380]: E1002 20:35:01.812886 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:35:01.813100 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:35:01.813100 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:35:01.813100 kubelet[1380]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8l6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:35:01.813100 kubelet[1380]: E1002 20:35:01.812957 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:35:02.026961 kubelet[1380]: I1002 20:35:02.026421 1380 scope.go:117] "RemoveContainer" containerID="270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71" Oct 2 20:35:02.026961 kubelet[1380]: I1002 20:35:02.026904 1380 scope.go:117] "RemoveContainer" containerID="270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71" Oct 2 20:35:02.030038 env[1057]: time="2023-10-02T20:35:02.029978238Z" level=info msg="RemoveContainer for \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\"" Oct 2 20:35:02.031070 env[1057]: time="2023-10-02T20:35:02.030950340Z" level=info msg="RemoveContainer for \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\"" Oct 2 20:35:02.031620 env[1057]: time="2023-10-02T20:35:02.031460407Z" level=error msg="RemoveContainer for \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\" failed" error="failed to set removing state for container \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\": container is already in removing state" Oct 2 20:35:02.032113 kubelet[1380]: E1002 20:35:02.032080 1380 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\": 
container is already in removing state" containerID="270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71" Oct 2 20:35:02.032385 kubelet[1380]: E1002 20:35:02.032358 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71": container is already in removing state; Skipping pod "cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)" Oct 2 20:35:02.033278 kubelet[1380]: E1002 20:35:02.033239 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:35:02.039218 env[1057]: time="2023-10-02T20:35:02.039077667Z" level=info msg="RemoveContainer for \"270b4ce755bd7bdef95619bbaff3e2591f2a3dae4e3fd4687649ffb80f6e4f71\" returns successfully" Oct 2 20:35:02.603254 kubelet[1380]: E1002 20:35:02.603172 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:02.691157 systemd[1]: run-containerd-runc-k8s.io-560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d-runc.edfkHI.mount: Deactivated successfully. Oct 2 20:35:02.691266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d-rootfs.mount: Deactivated successfully. Oct 2 20:35:03.604305 kubelet[1380]: E1002 20:35:03.604254 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:04.606230 kubelet[1380]: E1002 20:35:04.606114 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:04.896762 kubelet[1380]: W1002 20:35:04.896474 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice/cri-containerd-560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d.scope WatchSource:0}: task 560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d not found: not found Oct 2 20:35:05.607650 kubelet[1380]: E1002 20:35:05.607595 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:06.609095 kubelet[1380]: E1002 20:35:06.609032 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:07.610792 kubelet[1380]: E1002 20:35:07.610727 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:08.612929 kubelet[1380]: E1002 20:35:08.612836 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:09.614518 kubelet[1380]: E1002 20:35:09.614449 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:10.486784 kubelet[1380]: E1002 20:35:10.486677 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:10.610281 
kubelet[1380]: E1002 20:35:10.610222 1380 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 20:35:10.615353 kubelet[1380]: E1002 20:35:10.615267 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:10.642686 kubelet[1380]: E1002 20:35:10.642615 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:11.615690 kubelet[1380]: E1002 20:35:11.615606 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:12.616112 kubelet[1380]: E1002 20:35:12.616048 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:12.677490 kubelet[1380]: E1002 20:35:12.677431 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:35:13.617189 kubelet[1380]: E1002 20:35:13.617099 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:14.619815 kubelet[1380]: E1002 20:35:14.619110 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:15.621642 kubelet[1380]: E1002 20:35:15.621566 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:15.644335 kubelet[1380]: E1002 20:35:15.644273 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:16.622992 kubelet[1380]: E1002 20:35:16.622936 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:17.624291 kubelet[1380]: E1002 20:35:17.624114 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:18.624478 kubelet[1380]: E1002 20:35:18.624416 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:19.625389 kubelet[1380]: E1002 20:35:19.625329 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:20.627023 kubelet[1380]: E1002 20:35:20.626966 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:20.645429 kubelet[1380]: E1002 20:35:20.645362 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:21.628429 kubelet[1380]: E1002 20:35:21.628254 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:22.629370 kubelet[1380]: E1002 20:35:22.629254 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:23.630187 kubelet[1380]: E1002 20:35:23.630092 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:24.631625 kubelet[1380]: E1002 20:35:24.631575 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:25.632743 kubelet[1380]: E1002 20:35:25.632689 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:25.649064 kubelet[1380]: E1002 20:35:25.649026 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:26.634180 kubelet[1380]: E1002 20:35:26.634037 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:27.635444 kubelet[1380]: E1002 20:35:27.635308 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:27.677435 kubelet[1380]: E1002 20:35:27.677346 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:35:28.636432 kubelet[1380]: E1002 20:35:28.636375 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:29.638504 kubelet[1380]: E1002 20:35:29.638371 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:30.486926 kubelet[1380]: E1002 20:35:30.486804 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:30.638759 kubelet[1380]: E1002 20:35:30.638651 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:30.651603 kubelet[1380]: E1002 20:35:30.651359 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:31.639825 kubelet[1380]: E1002 20:35:31.639760 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:32.641600 kubelet[1380]: E1002 20:35:32.641544 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:33.642435 kubelet[1380]: E1002 20:35:33.642344 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:34.643811 kubelet[1380]: E1002 20:35:34.643765 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:35.645180 kubelet[1380]: E1002 20:35:35.645113 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:35.652439 kubelet[1380]: E1002 20:35:35.652417 1380 kubelet.go:2855] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:36.646475 kubelet[1380]: E1002 20:35:36.646406 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:37.647910 kubelet[1380]: E1002 20:35:37.647845 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:38.649662 kubelet[1380]: E1002 20:35:38.649588 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:39.650947 kubelet[1380]: E1002 20:35:39.650865 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:39.676900 kubelet[1380]: E1002 20:35:39.676726 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:35:40.651037 kubelet[1380]: E1002 20:35:40.651007 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:40.653306 kubelet[1380]: E1002 20:35:40.653259 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:41.652553 kubelet[1380]: E1002 20:35:41.652476 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:42.652948 kubelet[1380]: E1002 20:35:42.652724 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:43.653959 kubelet[1380]: E1002 20:35:43.653808 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:44.654829 kubelet[1380]: E1002 20:35:44.654697 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:45.655632 kubelet[1380]: E1002 20:35:45.655576 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:45.656600 kubelet[1380]: E1002 20:35:45.655828 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:46.657774 kubelet[1380]: E1002 20:35:46.657682 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:47.658016 kubelet[1380]: E1002 20:35:47.657913 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:48.659019 kubelet[1380]: E1002 20:35:48.658857 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:49.659943 kubelet[1380]: E1002 20:35:49.659862 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:35:50.486361 kubelet[1380]: E1002 20:35:50.486280 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:50.658733 kubelet[1380]: E1002 20:35:50.658611 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:50.660275 kubelet[1380]: E1002 20:35:50.660221 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:51.661404 kubelet[1380]: E1002 20:35:51.661321 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:52.662394 kubelet[1380]: E1002 20:35:52.662330 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:52.677510 kubelet[1380]: E1002 20:35:52.677471 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:35:53.664173 kubelet[1380]: E1002 20:35:53.664046 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:54.664339 kubelet[1380]: E1002 20:35:54.664262 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:55.661300 kubelet[1380]: E1002 20:35:55.661221 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:35:55.665418 kubelet[1380]: E1002 20:35:55.665384 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:56.666408 kubelet[1380]: E1002 20:35:56.666343 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:57.668087 kubelet[1380]: E1002 20:35:57.667940 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:58.668636 kubelet[1380]: E1002 20:35:58.668581 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:35:59.670432 kubelet[1380]: E1002 20:35:59.670306 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:00.663157 kubelet[1380]: E1002 20:36:00.663080 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:00.671478 kubelet[1380]: E1002 20:36:00.671429 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:01.672199 kubelet[1380]: E1002 20:36:01.672094 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:02.673690 kubelet[1380]: E1002 
20:36:02.673628 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:03.675245 kubelet[1380]: E1002 20:36:03.675097 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:04.675547 kubelet[1380]: E1002 20:36:04.675376 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:05.666323 kubelet[1380]: E1002 20:36:05.666215 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:05.675658 kubelet[1380]: E1002 20:36:05.675622 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:06.676456 kubelet[1380]: E1002 20:36:06.676381 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:07.677601 kubelet[1380]: E1002 20:36:07.677518 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:07.678287 kubelet[1380]: E1002 20:36:07.677633 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:36:08.678245 kubelet[1380]: E1002 20:36:08.678177 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:09.679588 kubelet[1380]: E1002 20:36:09.679536 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:10.486970 kubelet[1380]: E1002 20:36:10.486898 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:10.667731 kubelet[1380]: E1002 20:36:10.667681 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:10.680958 kubelet[1380]: E1002 20:36:10.680882 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:11.681717 kubelet[1380]: E1002 20:36:11.681648 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:12.682925 kubelet[1380]: E1002 20:36:12.682873 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:13.684438 kubelet[1380]: E1002 20:36:13.684372 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:14.694446 kubelet[1380]: E1002 20:36:14.694364 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:15.669808 kubelet[1380]: E1002 20:36:15.669697 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:15.696426 kubelet[1380]: E1002 20:36:15.696300 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:16.697563 kubelet[1380]: E1002 20:36:16.697144 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:17.699375 kubelet[1380]: E1002 20:36:17.699307 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:18.700465 kubelet[1380]: E1002 20:36:18.700403 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:19.701810 kubelet[1380]: E1002 20:36:19.701700 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:20.671268 kubelet[1380]: E1002 20:36:20.671090 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:20.677297 kubelet[1380]: E1002 20:36:20.677261 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:36:20.702577 kubelet[1380]: E1002 20:36:20.702429 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:21.703417 kubelet[1380]: E1002 20:36:21.703238 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:22.704067 kubelet[1380]: E1002 20:36:22.704004 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:23.704435 kubelet[1380]: E1002 20:36:23.704305 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:24.705167 kubelet[1380]: E1002 20:36:24.705038 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:25.673548 kubelet[1380]: E1002 20:36:25.673459 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:25.706002 kubelet[1380]: E1002 20:36:25.705960 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:26.706189 kubelet[1380]: E1002 20:36:26.706104 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:27.707584 kubelet[1380]: E1002 20:36:27.707489 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:28.708229 kubelet[1380]: E1002 20:36:28.708162 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:29.708437 kubelet[1380]: E1002 20:36:29.708326 1380 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:30.486852 kubelet[1380]: E1002 20:36:30.486763 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:30.674799 kubelet[1380]: E1002 20:36:30.674750 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:30.709703 kubelet[1380]: E1002 20:36:30.709558 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:31.681875 env[1057]: time="2023-10-02T20:36:31.681770482Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 20:36:31.697039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922129242.mount: Deactivated successfully. Oct 2 20:36:31.709361 env[1057]: time="2023-10-02T20:36:31.709264130Z" level=info msg="CreateContainer within sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\"" Oct 2 20:36:31.710175 kubelet[1380]: E1002 20:36:31.709907 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:31.710972 env[1057]: time="2023-10-02T20:36:31.710949220Z" level=info msg="StartContainer for \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\"" Oct 2 20:36:31.746806 systemd[1]: Started cri-containerd-e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0.scope. Oct 2 20:36:31.765234 systemd[1]: cri-containerd-e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0.scope: Deactivated successfully. 
Oct 2 20:36:31.777283 env[1057]: time="2023-10-02T20:36:31.777217191Z" level=info msg="shim disconnected" id=e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0 Oct 2 20:36:31.777573 env[1057]: time="2023-10-02T20:36:31.777542251Z" level=warning msg="cleaning up after shim disconnected" id=e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0 namespace=k8s.io Oct 2 20:36:31.777653 env[1057]: time="2023-10-02T20:36:31.777638090Z" level=info msg="cleaning up dead shim" Oct 2 20:36:31.789150 env[1057]: time="2023-10-02T20:36:31.789087372Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:36:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1918 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:36:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:36:31.789496 env[1057]: time="2023-10-02T20:36:31.789447597Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:36:31.791658 env[1057]: time="2023-10-02T20:36:31.789756016Z" level=error msg="Failed to pipe stderr of container \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\"" error="reading from a closed fifo" Oct 2 20:36:31.791717 env[1057]: time="2023-10-02T20:36:31.791267800Z" level=error msg="Failed to pipe stdout of container \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\"" error="reading from a closed fifo" Oct 2 20:36:31.795439 env[1057]: time="2023-10-02T20:36:31.795360776Z" level=error msg="StartContainer for \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:36:31.795992 kubelet[1380]: E1002 20:36:31.795737 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0" Oct 2 20:36:31.795992 kubelet[1380]: E1002 20:36:31.795917 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:36:31.795992 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:36:31.795992 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:36:31.795992 kubelet[1380]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8l6qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:36:31.795992 kubelet[1380]: E1002 20:36:31.795966 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:36:32.289653 kubelet[1380]: I1002 20:36:32.289599 1380 scope.go:117] "RemoveContainer" containerID="560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d" Oct 2 20:36:32.292266 kubelet[1380]: I1002 20:36:32.292180 1380 scope.go:117] "RemoveContainer" containerID="560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d" Oct 2 20:36:32.292771 env[1057]: time="2023-10-02T20:36:32.292674943Z" level=info msg="RemoveContainer for \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\"" Oct 2 20:36:32.295805 env[1057]: time="2023-10-02T20:36:32.295486405Z" level=info msg="RemoveContainer for \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\"" Oct 2 20:36:32.296101 env[1057]: time="2023-10-02T20:36:32.296025496Z" level=error msg="RemoveContainer for \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\" failed" error="failed to set removing state for container \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\": container is already in removing state" Oct 2 20:36:32.296378 kubelet[1380]: E1002 20:36:32.296333 1380 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\": 
container is already in removing state" containerID="560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d" Oct 2 20:36:32.296697 kubelet[1380]: E1002 20:36:32.296397 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d": container is already in removing state; Skipping pod "cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)" Oct 2 20:36:32.297036 kubelet[1380]: E1002 20:36:32.296957 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:36:32.298726 env[1057]: time="2023-10-02T20:36:32.298660317Z" level=info msg="RemoveContainer for \"560dbfcbfad48eb024976cf59ab6344bb6cd5f97a294d192e8b458c38cacfa9d\" returns successfully" Oct 2 20:36:32.694104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0-rootfs.mount: Deactivated successfully. Oct 2 20:36:32.710484 kubelet[1380]: E1002 20:36:32.710431 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:33.712190 kubelet[1380]: E1002 20:36:33.712071 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:34.713483 kubelet[1380]: E1002 20:36:34.713351 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:34.893220 kubelet[1380]: W1002 20:36:34.893092 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice/cri-containerd-e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0.scope WatchSource:0}: task e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0 not found: not found Oct 2 20:36:35.676262 kubelet[1380]: E1002 20:36:35.676177 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:35.713688 kubelet[1380]: E1002 20:36:35.713549 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:36.713938 kubelet[1380]: E1002 20:36:36.713856 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:37.715115 kubelet[1380]: E1002 20:36:37.715008 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:38.716944 kubelet[1380]: E1002 20:36:38.716876 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:39.717946 kubelet[1380]: E1002 20:36:39.717864 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:40.677573 kubelet[1380]: E1002 20:36:40.677507 1380 kubelet.go:2855] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:40.718115 kubelet[1380]: E1002 20:36:40.718009 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:41.718380 kubelet[1380]: E1002 20:36:41.718296 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:42.718847 kubelet[1380]: E1002 20:36:42.718756 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:43.719343 kubelet[1380]: E1002 20:36:43.719274 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:44.677913 kubelet[1380]: E1002 20:36:44.677834 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:36:44.720857 kubelet[1380]: E1002 20:36:44.720790 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:45.679347 kubelet[1380]: E1002 20:36:45.679271 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:45.722215 kubelet[1380]: E1002 20:36:45.722108 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:46.722339 kubelet[1380]: E1002 20:36:46.722281 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:47.723315 kubelet[1380]: E1002 20:36:47.723251 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:48.724288 kubelet[1380]: E1002 20:36:48.724224 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:49.725840 kubelet[1380]: E1002 20:36:49.725734 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:50.487311 kubelet[1380]: E1002 20:36:50.487230 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:50.681042 kubelet[1380]: E1002 20:36:50.680990 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:50.726500 kubelet[1380]: E1002 20:36:50.726425 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:51.728371 kubelet[1380]: E1002 20:36:51.728289 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:52.729271 kubelet[1380]: E1002 20:36:52.729170 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:36:53.730473 kubelet[1380]: E1002 20:36:53.730279 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:54.731625 kubelet[1380]: E1002 20:36:54.731499 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:55.683809 kubelet[1380]: E1002 20:36:55.683736 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:36:55.732294 kubelet[1380]: E1002 20:36:55.732209 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:56.733339 kubelet[1380]: E1002 20:36:56.733284 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:57.735297 kubelet[1380]: E1002 20:36:57.735234 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:58.677674 kubelet[1380]: E1002 20:36:58.677603 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-pxpmm_kube-system(6258f96c-67ee-4076-9b7a-b023abccd2f8)\"" pod="kube-system/cilium-pxpmm" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" Oct 2 20:36:58.736452 kubelet[1380]: E1002 20:36:58.736330 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:59.369937 env[1057]: time="2023-10-02T20:36:59.369834875Z" level=info msg="StopPodSandbox for \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\"" Oct 2 20:36:59.373759 env[1057]: time="2023-10-02T20:36:59.369957445Z" level=info msg="Container to stop \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:36:59.372772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc-shm.mount: Deactivated successfully. Oct 2 20:36:59.387000 audit: BPF prog-id=71 op=UNLOAD Oct 2 20:36:59.388402 systemd[1]: cri-containerd-88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc.scope: Deactivated successfully. Oct 2 20:36:59.391578 kernel: kauditd_printk_skb: 192 callbacks suppressed Oct 2 20:36:59.391765 kernel: audit: type=1334 audit(1696279019.387:657): prog-id=71 op=UNLOAD Oct 2 20:36:59.400682 kernel: audit: type=1334 audit(1696279019.395:658): prog-id=74 op=UNLOAD Oct 2 20:36:59.395000 audit: BPF prog-id=74 op=UNLOAD Oct 2 20:36:59.441339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc-rootfs.mount: Deactivated successfully. 
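The pod_workers entries above show kubelet backing off restarts of the failed mount-cgroup container: 40s, then 1m20s, then 2m40s across successive attempts, which is the CrashLoopBackOff doubling pattern. The sketch below reproduces that progression under the assumption of kubelet's usual 10-second base delay and 5-minute cap; neither value is recorded in this log, and the code is illustrative only, not kubelet source.

// backoff_sketch.go — illustrative doubling back-off, not kubelet's implementation.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second   // assumed base delay
	maxDelay := 5 * time.Minute // assumed cap (kubelet's MaxContainerBackOff)

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("restart attempt %d: back-off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Run as written this prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s; the 40s, 1m20s, and 2m40s values match the back-off messages logged for cilium-pxpmm above.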
Oct 2 20:36:59.451599 env[1057]: time="2023-10-02T20:36:59.451530119Z" level=info msg="shim disconnected" id=88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc Oct 2 20:36:59.452458 env[1057]: time="2023-10-02T20:36:59.452422812Z" level=warning msg="cleaning up after shim disconnected" id=88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc namespace=k8s.io Oct 2 20:36:59.452617 env[1057]: time="2023-10-02T20:36:59.452593793Z" level=info msg="cleaning up dead shim" Oct 2 20:36:59.464567 env[1057]: time="2023-10-02T20:36:59.464512694Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:36:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1954 runtime=io.containerd.runc.v2\n" Oct 2 20:36:59.465226 env[1057]: time="2023-10-02T20:36:59.465196347Z" level=info msg="TearDown network for sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" successfully" Oct 2 20:36:59.465336 env[1057]: time="2023-10-02T20:36:59.465316191Z" level=info msg="StopPodSandbox for \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" returns successfully" Oct 2 20:36:59.643678 kubelet[1380]: I1002 20:36:59.643487 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-cgroup\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.644927 kubelet[1380]: I1002 20:36:59.644828 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-etc-cni-netd\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.645309 kubelet[1380]: I1002 20:36:59.645239 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.645476 kubelet[1380]: I1002 20:36:59.643609 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.645563 kubelet[1380]: I1002 20:36:59.645479 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.645760 kubelet[1380]: I1002 20:36:59.645703 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-kernel\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.646082 kubelet[1380]: I1002 20:36:59.646035 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l6qf\" (UniqueName: \"kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-kube-api-access-8l6qf\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.646416 kubelet[1380]: I1002 20:36:59.646390 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-net\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.646705 kubelet[1380]: I1002 20:36:59.646642 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-run\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.646990 kubelet[1380]: I1002 20:36:59.646928 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-lib-modules\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.647310 kubelet[1380]: I1002 20:36:59.647252 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.647457 kubelet[1380]: I1002 20:36:59.647334 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.647457 kubelet[1380]: I1002 20:36:59.647378 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.647764 kubelet[1380]: I1002 20:36:59.647735 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-config-path\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.648060 kubelet[1380]: I1002 20:36:59.648036 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6258f96c-67ee-4076-9b7a-b023abccd2f8-clustermesh-secrets\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.648384 kubelet[1380]: I1002 20:36:59.648360 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-xtables-lock\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.648702 kubelet[1380]: I1002 20:36:59.648677 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-hubble-tls\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.648987 kubelet[1380]: I1002 20:36:59.648926 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cni-path\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.649295 kubelet[1380]: I1002 20:36:59.649235 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-hostproc\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.649586 kubelet[1380]: I1002 20:36:59.649522 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-bpf-maps\") pod \"6258f96c-67ee-4076-9b7a-b023abccd2f8\" (UID: \"6258f96c-67ee-4076-9b7a-b023abccd2f8\") " Oct 2 20:36:59.649866 kubelet[1380]: I1002 20:36:59.649841 1380 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-etc-cni-netd\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.650115 kubelet[1380]: I1002 20:36:59.650070 1380 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-kernel\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.650395 kubelet[1380]: I1002 20:36:59.650338 1380 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-host-proc-sys-net\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.650592 kubelet[1380]: I1002 20:36:59.650569 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-run\") on node \"172.24.4.121\" 
DevicePath \"\"" Oct 2 20:36:59.650841 kubelet[1380]: I1002 20:36:59.650789 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-cgroup\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.651036 kubelet[1380]: I1002 20:36:59.651013 1380 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-lib-modules\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.651360 kubelet[1380]: I1002 20:36:59.651286 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.652578 kubelet[1380]: I1002 20:36:59.652510 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:36:59.653195 kubelet[1380]: I1002 20:36:59.653113 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.653469 kubelet[1380]: I1002 20:36:59.653405 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.653727 kubelet[1380]: I1002 20:36:59.653664 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:36:59.659109 systemd[1]: var-lib-kubelet-pods-6258f96c\x2d67ee\x2d4076\x2d9b7a\x2db023abccd2f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8l6qf.mount: Deactivated successfully. Oct 2 20:36:59.661319 kubelet[1380]: I1002 20:36:59.661247 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-kube-api-access-8l6qf" (OuterVolumeSpecName: "kube-api-access-8l6qf") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "kube-api-access-8l6qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:36:59.667009 systemd[1]: var-lib-kubelet-pods-6258f96c\x2d67ee\x2d4076\x2d9b7a\x2db023abccd2f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 20:36:59.670869 systemd[1]: var-lib-kubelet-pods-6258f96c\x2d67ee\x2d4076\x2d9b7a\x2db023abccd2f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:36:59.672880 kubelet[1380]: I1002 20:36:59.672792 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:36:59.673743 kubelet[1380]: I1002 20:36:59.673685 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6258f96c-67ee-4076-9b7a-b023abccd2f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6258f96c-67ee-4076-9b7a-b023abccd2f8" (UID: "6258f96c-67ee-4076-9b7a-b023abccd2f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:36:59.737061 kubelet[1380]: E1002 20:36:59.736996 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:36:59.751823 kubelet[1380]: I1002 20:36:59.751705 1380 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-cni-path\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.751823 kubelet[1380]: I1002 20:36:59.751776 1380 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-hostproc\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.751823 kubelet[1380]: I1002 20:36:59.751804 1380 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-bpf-maps\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.751823 kubelet[1380]: I1002 20:36:59.751835 1380 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8l6qf\" (UniqueName: \"kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-kube-api-access-8l6qf\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.752300 kubelet[1380]: I1002 20:36:59.751868 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6258f96c-67ee-4076-9b7a-b023abccd2f8-cilium-config-path\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.752300 kubelet[1380]: I1002 20:36:59.751897 1380 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6258f96c-67ee-4076-9b7a-b023abccd2f8-xtables-lock\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.752300 kubelet[1380]: I1002 20:36:59.751928 1380 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6258f96c-67ee-4076-9b7a-b023abccd2f8-hubble-tls\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:36:59.752300 kubelet[1380]: I1002 20:36:59.751956 1380 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6258f96c-67ee-4076-9b7a-b023abccd2f8-clustermesh-secrets\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:37:00.375464 kubelet[1380]: I1002 20:37:00.375421 1380 scope.go:117] "RemoveContainer" 
containerID="e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0" Oct 2 20:37:00.383915 systemd[1]: Removed slice kubepods-burstable-pod6258f96c_67ee_4076_9b7a_b023abccd2f8.slice. Oct 2 20:37:00.385866 env[1057]: time="2023-10-02T20:37:00.385762278Z" level=info msg="RemoveContainer for \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\"" Oct 2 20:37:00.392016 env[1057]: time="2023-10-02T20:37:00.391914696Z" level=info msg="RemoveContainer for \"e2c5da209bd61d31bf89e8e0123785cae4dad793b60a44150ba186d81fc2ddb0\" returns successfully" Oct 2 20:37:00.682659 kubelet[1380]: I1002 20:37:00.682571 1380 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" path="/var/lib/kubelet/pods/6258f96c-67ee-4076-9b7a-b023abccd2f8/volumes" Oct 2 20:37:00.685435 kubelet[1380]: E1002 20:37:00.685251 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:00.739016 kubelet[1380]: E1002 20:37:00.738863 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:01.739487 kubelet[1380]: E1002 20:37:01.739369 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:02.740047 kubelet[1380]: E1002 20:37:02.739943 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:03.740993 kubelet[1380]: E1002 20:37:03.740875 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:04.330066 kubelet[1380]: I1002 20:37:04.329988 1380 topology_manager.go:215] "Topology Admit Handler" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" podNamespace="kube-system" podName="cilium-mtb4m" Oct 2 20:37:04.330607 kubelet[1380]: E1002 20:37:04.330576 1380 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.330851 kubelet[1380]: E1002 20:37:04.330826 1380 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.331072 kubelet[1380]: E1002 20:37:04.331046 1380 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.331338 kubelet[1380]: E1002 20:37:04.331312 1380 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.331634 kubelet[1380]: I1002 20:37:04.331569 1380 memory_manager.go:346] "RemoveStaleState removing state" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.331851 kubelet[1380]: I1002 20:37:04.331827 1380 memory_manager.go:346] "RemoveStaleState removing state" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.332114 kubelet[1380]: I1002 20:37:04.332048 1380 memory_manager.go:346] "RemoveStaleState removing state" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.332392 kubelet[1380]: I1002 20:37:04.332364 1380 memory_manager.go:346] "RemoveStaleState removing state" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" 
containerName="mount-cgroup" Oct 2 20:37:04.332697 kubelet[1380]: E1002 20:37:04.332670 1380 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.332914 kubelet[1380]: E1002 20:37:04.332890 1380 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.333168 kubelet[1380]: I1002 20:37:04.333103 1380 memory_manager.go:346] "RemoveStaleState removing state" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.333376 kubelet[1380]: I1002 20:37:04.333351 1380 memory_manager.go:346] "RemoveStaleState removing state" podUID="6258f96c-67ee-4076-9b7a-b023abccd2f8" containerName="mount-cgroup" Oct 2 20:37:04.355101 systemd[1]: Created slice kubepods-burstable-pod886eee00_26df_4d0e_9667_d087c5f868c9.slice. Oct 2 20:37:04.367996 kubelet[1380]: I1002 20:37:04.367913 1380 topology_manager.go:215] "Topology Admit Handler" podUID="4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-c6cll" Oct 2 20:37:04.369039 kubelet[1380]: W1002 20:37:04.368830 1380 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.24.4.121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.369765 kubelet[1380]: E1002 20:37:04.369724 1380 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.24.4.121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.370018 kubelet[1380]: W1002 20:37:04.369586 1380 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.24.4.121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.370223 kubelet[1380]: E1002 20:37:04.370059 1380 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.24.4.121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.370223 kubelet[1380]: W1002 20:37:04.369674 1380 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.370223 kubelet[1380]: E1002 20:37:04.370183 1380 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.24.4.121" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.370223 kubelet[1380]: W1002 20:37:04.368405 1380 
reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.24.4.121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.370223 kubelet[1380]: E1002 20:37:04.370216 1380 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.24.4.121" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.24.4.121' and this object Oct 2 20:37:04.382531 systemd[1]: Created slice kubepods-besteffort-pod4becc7a7_a1b0_4f3d_9f5a_6ffc2b6648b9.slice. Oct 2 20:37:04.484989 kubelet[1380]: I1002 20:37:04.484862 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cni-path\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.484989 kubelet[1380]: I1002 20:37:04.484976 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-etc-cni-netd\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485037 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-kernel\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485099 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-c6cll\" (UID: \"4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9\") " pod="kube-system/cilium-operator-6bc8ccdb58-c6cll" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485211 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjpxx\" (UniqueName: \"kubernetes.io/projected/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-kube-api-access-wjpxx\") pod \"cilium-operator-6bc8ccdb58-c6cll\" (UID: \"4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9\") " pod="kube-system/cilium-operator-6bc8ccdb58-c6cll" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485277 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-hostproc\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485333 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-cgroup\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 
kubelet[1380]: I1002 20:37:04.485391 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-ipsec-secrets\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485446 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-xtables-lock\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485500 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-config-path\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485558 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdkcq\" (UniqueName: \"kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-kube-api-access-cdkcq\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485612 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-run\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485664 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-lib-modules\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485716 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-bpf-maps\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485769 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-clustermesh-secrets\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485825 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-net\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.486018 kubelet[1380]: I1002 20:37:04.485878 1380 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-hubble-tls\") pod \"cilium-mtb4m\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " pod="kube-system/cilium-mtb4m" Oct 2 20:37:04.741509 kubelet[1380]: E1002 20:37:04.741353 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:05.589414 kubelet[1380]: E1002 20:37:05.589340 1380 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Oct 2 20:37:05.589949 kubelet[1380]: E1002 20:37:05.589915 1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-cilium-config-path podName:4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9 nodeName:}" failed. No retries permitted until 2023-10-02 20:37:06.089861541 +0000 UTC m=+236.277649946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-cilium-config-path") pod "cilium-operator-6bc8ccdb58-c6cll" (UID: "4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9") : failed to sync configmap cache: timed out waiting for the condition Oct 2 20:37:05.590560 kubelet[1380]: E1002 20:37:05.590311 1380 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Oct 2 20:37:05.590729 kubelet[1380]: E1002 20:37:05.590684 1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-config-path podName:886eee00-26df-4d0e-9667-d087c5f868c9 nodeName:}" failed. No retries permitted until 2023-10-02 20:37:06.090629822 +0000 UTC m=+236.278418327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-config-path") pod "cilium-mtb4m" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9") : failed to sync configmap cache: timed out waiting for the condition Oct 2 20:37:05.590903 kubelet[1380]: E1002 20:37:05.590783 1380 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Oct 2 20:37:05.590903 kubelet[1380]: E1002 20:37:05.590886 1380 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-clustermesh-secrets podName:886eee00-26df-4d0e-9667-d087c5f868c9 nodeName:}" failed. No retries permitted until 2023-10-02 20:37:06.090855535 +0000 UTC m=+236.278644111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-clustermesh-secrets") pod "cilium-mtb4m" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9") : failed to sync secret cache: timed out waiting for the condition Oct 2 20:37:05.687576 kubelet[1380]: E1002 20:37:05.687527 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:05.742487 kubelet[1380]: E1002 20:37:05.742398 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:06.169822 env[1057]: time="2023-10-02T20:37:06.169650503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtb4m,Uid:886eee00-26df-4d0e-9667-d087c5f868c9,Namespace:kube-system,Attempt:0,}" Oct 2 20:37:06.193375 env[1057]: time="2023-10-02T20:37:06.193285955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-c6cll,Uid:4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9,Namespace:kube-system,Attempt:0,}" Oct 2 20:37:06.233077 env[1057]: time="2023-10-02T20:37:06.232912997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:37:06.233451 env[1057]: time="2023-10-02T20:37:06.233006192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:37:06.233451 env[1057]: time="2023-10-02T20:37:06.233044253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:37:06.233786 env[1057]: time="2023-10-02T20:37:06.233528802Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc pid=1982 runtime=io.containerd.runc.v2 Oct 2 20:37:06.247615 env[1057]: time="2023-10-02T20:37:06.247403501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:37:06.247615 env[1057]: time="2023-10-02T20:37:06.247441532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:37:06.247615 env[1057]: time="2023-10-02T20:37:06.247454887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:37:06.247968 env[1057]: time="2023-10-02T20:37:06.247641958Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360 pid=1999 runtime=io.containerd.runc.v2 Oct 2 20:37:06.267012 systemd[1]: Started cri-containerd-b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360.scope. Oct 2 20:37:06.278735 systemd[1]: Started cri-containerd-eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc.scope. 
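The kubepods-*.slice units created a few entries above (and the one removed earlier for the deleted pod) follow the systemd cgroup driver's naming: the pod's QoS class plus its UID with dashes replaced by underscores. A minimal sketch of that rule as inferred from the slice names in this log (the function name is ours; Guaranteed-QoS pods, not present here, would omit the QoS segment):

def kubepods_slice(pod_uid: str, qos: str) -> str:
    # kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice
    uid = pod_uid.replace("-", "_")
    return f"kubepods-{qos}-pod{uid}.slice" if qos else f"kubepods-pod{uid}.slice"

print(kubepods_slice("886eee00-26df-4d0e-9667-d087c5f868c9", "burstable"))
# kubepods-burstable-pod886eee00_26df_4d0e_9667_d087c5f868c9.slice   (cilium-mtb4m)
print(kubepods_slice("4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9", "besteffort"))
# kubepods-besteffort-pod4becc7a7_a1b0_4f3d_9f5a_6ffc2b6648b9.slice  (cilium-operator-6bc8ccdb58-c6cll)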
Oct 2 20:37:06.297528 kernel: audit: type=1400 audit(1696279026.288:659): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.297773 kernel: audit: type=1400 audit(1696279026.288:660): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.302554 kernel: audit: type=1400 audit(1696279026.288:661): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.314139 kernel: audit: type=1400 audit(1696279026.288:662): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.314212 kernel: audit: type=1400 audit(1696279026.288:663): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319248 kernel: audit: type=1400 audit(1696279026.288:664): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.322853 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:37:06.322912 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 20:37:06.322933 kernel: audit: backlog limit exceeded Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.327243 kernel: audit: type=1400 audit(1696279026.288:665): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.289000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.289000 audit: BPF prog-id=78 op=LOAD Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653738633231343032366534613836393730616365663734626538 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653738633231343032366534613836393730616365663734626538 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: 
denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.290000 audit: BPF prog-id=79 op=LOAD Oct 2 20:37:06.290000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0002168b0 items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653738633231343032366534613836393730616365663734626538 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit: BPF prog-id=80 op=LOAD Oct 2 20:37:06.296000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 
a1=c000145770 a2=78 a3=c0002168f8 items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653738633231343032366534613836393730616365663734626538 Oct 2 20:37:06.296000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:37:06.296000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { perfmon } for pid=2009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit[2009]: AVC avc: denied { bpf } for pid=2009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.296000 audit: BPF prog-id=81 op=LOAD Oct 2 20:37:06.296000 audit[2009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000216d08 items=0 ppid=1999 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.296000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237653738633231343032366534613836393730616365663734626538 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.319000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.329000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.329000 audit[2000]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1982 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633065343533616531383130396232383064333632616561366632 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1982 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.330000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633065343533616531383130396232383064333632616561366632 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit: BPF prog-id=83 op=LOAD Oct 2 20:37:06.330000 audit[2000]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0001851d0 items=0 ppid=1982 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633065343533616531383130396232383064333632616561366632 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } 
for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit: BPF prog-id=84 op=LOAD Oct 2 20:37:06.330000 audit[2000]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000185218 items=0 ppid=1982 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633065343533616531383130396232383064333632616561366632 Oct 2 20:37:06.330000 audit: BPF prog-id=84 op=UNLOAD Oct 2 20:37:06.330000 audit: BPF prog-id=83 op=UNLOAD Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } 
for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { perfmon } for pid=2000 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit[2000]: AVC avc: denied { bpf } for pid=2000 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:06.330000 audit: BPF prog-id=85 op=LOAD Oct 2 20:37:06.330000 audit[2000]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000185628 items=0 ppid=1982 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:06.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561633065343533616531383130396232383064333632616561366632 Oct 2 20:37:06.357441 env[1057]: time="2023-10-02T20:37:06.357374591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-c6cll,Uid:4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360\"" Oct 2 20:37:06.358531 env[1057]: time="2023-10-02T20:37:06.358491225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mtb4m,Uid:886eee00-26df-4d0e-9667-d087c5f868c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\"" Oct 2 20:37:06.360553 env[1057]: time="2023-10-02T20:37:06.360508267Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 20:37:06.365354 env[1057]: time="2023-10-02T20:37:06.365299332Z" level=info msg="CreateContainer within sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:37:06.389894 env[1057]: time="2023-10-02T20:37:06.389824313Z" level=info msg="CreateContainer within sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\"" Oct 2 20:37:06.390574 env[1057]: time="2023-10-02T20:37:06.390536860Z" level=info msg="StartContainer for \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\"" Oct 2 20:37:06.412893 systemd[1]: Started cri-containerd-ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c.scope. Oct 2 20:37:06.428064 systemd[1]: cri-containerd-ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c.scope: Deactivated successfully. 
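The proctitle= fields in the audit records above are hex-encoded, NUL-separated argv strings for the runc invocations that triggered the AVC and BPF records (arch=c000003e is x86_64, where syscall 321 is bpf(2); a0=5 in those SYSCALL records corresponds to the BPF_PROG_LOAD command, matching the "BPF prog-id=... op=LOAD" lines). A minimal decoding sketch using a shortened prefix of one proctitle value from the log:

# Prefix of one proctitle= value from the records above, truncated for brevity.
hex_title = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E63"
             "2F6B38732E696F002D2D6C6F67")
argv = [part.decode() for part in bytes.fromhex(hex_title).split(b"\x00")]
print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']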
Oct 2 20:37:06.458485 env[1057]: time="2023-10-02T20:37:06.458425081Z" level=info msg="shim disconnected" id=ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c Oct 2 20:37:06.458688 env[1057]: time="2023-10-02T20:37:06.458487318Z" level=warning msg="cleaning up after shim disconnected" id=ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c namespace=k8s.io Oct 2 20:37:06.458688 env[1057]: time="2023-10-02T20:37:06.458500923Z" level=info msg="cleaning up dead shim" Oct 2 20:37:06.467726 env[1057]: time="2023-10-02T20:37:06.467672252Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:37:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2080 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:37:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:37:06.468040 env[1057]: time="2023-10-02T20:37:06.467968468Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 20:37:06.469282 env[1057]: time="2023-10-02T20:37:06.469220646Z" level=error msg="Failed to pipe stdout of container \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\"" error="reading from a closed fifo" Oct 2 20:37:06.469689 env[1057]: time="2023-10-02T20:37:06.469406034Z" level=error msg="Failed to pipe stderr of container \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\"" error="reading from a closed fifo" Oct 2 20:37:06.473430 env[1057]: time="2023-10-02T20:37:06.473389153Z" level=error msg="StartContainer for \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:37:06.473922 kubelet[1380]: E1002 20:37:06.473698 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c" Oct 2 20:37:06.473922 kubelet[1380]: E1002 20:37:06.473831 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:37:06.473922 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:37:06.473922 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:37:06.473922 kubelet[1380]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cdkcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:37:06.473922 kubelet[1380]: E1002 20:37:06.473880 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:06.743774 kubelet[1380]: E1002 20:37:06.743550 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:07.406944 env[1057]: time="2023-10-02T20:37:07.406878171Z" level=info msg="CreateContainer within sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:37:07.441965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349842819.mount: Deactivated successfully. Oct 2 20:37:07.453992 env[1057]: time="2023-10-02T20:37:07.453804169Z" level=info msg="CreateContainer within sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\"" Oct 2 20:37:07.455775 env[1057]: time="2023-10-02T20:37:07.455712648Z" level=info msg="StartContainer for \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\"" Oct 2 20:37:07.497851 systemd[1]: Started cri-containerd-e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999.scope. Oct 2 20:37:07.515271 systemd[1]: cri-containerd-e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999.scope: Deactivated successfully. 
Oct 2 20:37:07.528136 env[1057]: time="2023-10-02T20:37:07.528063286Z" level=info msg="shim disconnected" id=e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999 Oct 2 20:37:07.528370 env[1057]: time="2023-10-02T20:37:07.528351898Z" level=warning msg="cleaning up after shim disconnected" id=e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999 namespace=k8s.io Oct 2 20:37:07.528479 env[1057]: time="2023-10-02T20:37:07.528464669Z" level=info msg="cleaning up dead shim" Oct 2 20:37:07.539005 env[1057]: time="2023-10-02T20:37:07.538946656Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:37:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2118 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:37:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:37:07.539496 env[1057]: time="2023-10-02T20:37:07.539435513Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 20:37:07.539802 env[1057]: time="2023-10-02T20:37:07.539729845Z" level=error msg="Failed to pipe stdout of container \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\"" error="reading from a closed fifo" Oct 2 20:37:07.540220 env[1057]: time="2023-10-02T20:37:07.540177905Z" level=error msg="Failed to pipe stderr of container \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\"" error="reading from a closed fifo" Oct 2 20:37:07.544384 env[1057]: time="2023-10-02T20:37:07.544343046Z" level=error msg="StartContainer for \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:37:07.544763 kubelet[1380]: E1002 20:37:07.544726 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999" Oct 2 20:37:07.545371 kubelet[1380]: E1002 20:37:07.545339 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:37:07.545371 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:37:07.545371 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:37:07.545371 kubelet[1380]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cdkcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:37:07.545611 kubelet[1380]: E1002 20:37:07.545418 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:07.744965 kubelet[1380]: E1002 20:37:07.744774 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:08.126594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999-rootfs.mount: Deactivated successfully. 
Oct 2 20:37:08.407624 kubelet[1380]: I1002 20:37:08.407022 1380 scope.go:117] "RemoveContainer" containerID="ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c" Oct 2 20:37:08.407624 kubelet[1380]: I1002 20:37:08.407559 1380 scope.go:117] "RemoveContainer" containerID="ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c" Oct 2 20:37:08.408527 env[1057]: time="2023-10-02T20:37:08.408485005Z" level=info msg="RemoveContainer for \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\"" Oct 2 20:37:08.412572 env[1057]: time="2023-10-02T20:37:08.412522255Z" level=info msg="RemoveContainer for \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\" returns successfully" Oct 2 20:37:08.412911 env[1057]: time="2023-10-02T20:37:08.412782303Z" level=error msg="ContainerStatus for \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\": not found" Oct 2 20:37:08.413666 kubelet[1380]: E1002 20:37:08.413073 1380 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c\": not found" containerID="ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c" Oct 2 20:37:08.413666 kubelet[1380]: E1002 20:37:08.413152 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": failed to get container status "ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c": rpc error: code = NotFound desc = an error occurred when try to find container "ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c": not found; Skipping pod "cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)" Oct 2 20:37:08.413666 kubelet[1380]: E1002 20:37:08.413444 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:08.745664 kubelet[1380]: E1002 20:37:08.745554 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:09.142227 env[1057]: time="2023-10-02T20:37:09.140555419Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:37:09.143613 env[1057]: time="2023-10-02T20:37:09.143536709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:37:09.145475 env[1057]: time="2023-10-02T20:37:09.145426543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:37:09.147217 env[1057]: time="2023-10-02T20:37:09.147099421Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 20:37:09.150398 env[1057]: time="2023-10-02T20:37:09.150341991Z" level=info msg="CreateContainer within sandbox \"b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 20:37:09.174461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835821742.mount: Deactivated successfully. Oct 2 20:37:09.178724 env[1057]: time="2023-10-02T20:37:09.178655368Z" level=info msg="CreateContainer within sandbox \"b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\"" Oct 2 20:37:09.179575 env[1057]: time="2023-10-02T20:37:09.179531039Z" level=info msg="StartContainer for \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\"" Oct 2 20:37:09.181566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990447604.mount: Deactivated successfully. Oct 2 20:37:09.210292 systemd[1]: Started cri-containerd-277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be.scope. Oct 2 20:37:09.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.236000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.237000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.237000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.238000 audit: BPF prog-id=86 op=LOAD Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { bpf } for 
pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=1999 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:09.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237376365633634346531383163323030313231323462393034613230 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=8 items=0 ppid=1999 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:09.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237376365633634346531383163323030313231323462393034613230 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:37:09.240000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.240000 audit: BPF prog-id=87 op=LOAD Oct 2 20:37:09.240000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c00039c9a0 items=0 ppid=1999 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:09.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237376365633634346531383163323030313231323462393034613230 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.241000 audit: BPF prog-id=88 op=LOAD Oct 2 20:37:09.241000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c00039c9e8 items=0 ppid=1999 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:09.241000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237376365633634346531383163323030313231323462393034613230 Oct 2 20:37:09.242000 audit: BPF 
prog-id=88 op=UNLOAD Oct 2 20:37:09.242000 audit: BPF prog-id=87 op=UNLOAD Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { perfmon } for pid=2138 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit[2138]: AVC avc: denied { bpf } for pid=2138 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:37:09.242000 audit: BPF prog-id=89 op=LOAD Oct 2 20:37:09.242000 audit[2138]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c00039cdf8 items=0 ppid=1999 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:37:09.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237376365633634346531383163323030313231323462393034613230 Oct 2 20:37:09.266380 env[1057]: time="2023-10-02T20:37:09.266309876Z" level=info msg="StartContainer for \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\" returns successfully" Oct 2 20:37:09.283000 audit[2149]: AVC avc: denied { map_create } for pid=2149 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c127,c264 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c127,c264 tclass=bpf permissive=0 Oct 2 20:37:09.283000 audit[2149]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0004ed7d0 a2=48 a3=c0004ed7c0 items=0 ppid=1999 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c127,c264 key=(null) Oct 2 20:37:09.283000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 20:37:09.412667 kubelet[1380]: E1002 20:37:09.412641 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:09.533229 kubelet[1380]: I1002 20:37:09.533069 1380 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-c6cll" podStartSLOduration=2.744724218 podCreationTimestamp="2023-10-02 20:37:04 +0000 UTC" firstStartedPulling="2023-10-02 20:37:06.35952888 +0000 UTC m=+236.547317235" lastFinishedPulling="2023-10-02 20:37:09.147781099 +0000 UTC m=+239.335569454" observedRunningTime="2023-10-02 20:37:09.456358216 +0000 UTC m=+239.644146601" watchObservedRunningTime="2023-10-02 20:37:09.532976437 +0000 UTC m=+239.720764842" Oct 2 20:37:09.567472 kubelet[1380]: W1002 20:37:09.567345 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886eee00_26df_4d0e_9667_d087c5f868c9.slice/cri-containerd-ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c.scope WatchSource:0}: container "ced150a390f1308b50f33328f61b4bc1cbc8a80f608ce0c5ff4528142273908c" in namespace "k8s.io": not found Oct 2 20:37:09.746667 kubelet[1380]: E1002 20:37:09.746397 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:10.486837 kubelet[1380]: E1002 20:37:10.486792 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:10.549472 env[1057]: time="2023-10-02T20:37:10.549356672Z" level=info msg="StopPodSandbox for \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\"" Oct 2 20:37:10.552029 env[1057]: time="2023-10-02T20:37:10.549566795Z" level=info msg="TearDown network for sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" successfully" Oct 2 20:37:10.552029 env[1057]: time="2023-10-02T20:37:10.549643458Z" level=info msg="StopPodSandbox for \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" returns successfully" Oct 2 20:37:10.552029 env[1057]: time="2023-10-02T20:37:10.550688619Z" level=info msg="RemovePodSandbox for \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\"" Oct 2 20:37:10.552029 env[1057]: time="2023-10-02T20:37:10.550783396Z" level=info msg="Forcibly stopping sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\"" Oct 2 20:37:10.552029 env[1057]: time="2023-10-02T20:37:10.551067730Z" level=info msg="TearDown network for sandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" successfully" Oct 2 20:37:10.561349 env[1057]: time="2023-10-02T20:37:10.561245366Z" level=info msg="RemovePodSandbox \"88651dce195e4c5ad29e5f33cddccd034d17964f9fc79aafb495f166b5830acc\" returns successfully" Oct 2 20:37:10.689269 kubelet[1380]: E1002 20:37:10.689226 1380 kubelet.go:2855] "Container 
runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:10.747410 kubelet[1380]: E1002 20:37:10.746637 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:11.747535 kubelet[1380]: E1002 20:37:11.747451 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:12.679924 kubelet[1380]: W1002 20:37:12.679835 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886eee00_26df_4d0e_9667_d087c5f868c9.slice/cri-containerd-e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999.scope WatchSource:0}: task e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999 not found: not found Oct 2 20:37:12.748894 kubelet[1380]: E1002 20:37:12.748807 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:13.749771 kubelet[1380]: E1002 20:37:13.749610 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:14.750561 kubelet[1380]: E1002 20:37:14.750408 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:15.691321 kubelet[1380]: E1002 20:37:15.691273 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:15.750744 kubelet[1380]: E1002 20:37:15.750613 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:16.751575 kubelet[1380]: E1002 20:37:16.751471 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:17.752008 kubelet[1380]: E1002 20:37:17.751959 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:18.753047 kubelet[1380]: E1002 20:37:18.752901 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:19.754125 kubelet[1380]: E1002 20:37:19.754047 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:20.694493 kubelet[1380]: E1002 20:37:20.694353 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:20.754311 kubelet[1380]: E1002 20:37:20.754232 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:21.755207 kubelet[1380]: E1002 20:37:21.755075 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:22.755981 kubelet[1380]: E1002 20:37:22.755907 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:23.682323 env[1057]: time="2023-10-02T20:37:23.682117020Z" level=info msg="CreateContainer within sandbox 
\"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:37:23.711460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002175967.mount: Deactivated successfully. Oct 2 20:37:23.724603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount758190675.mount: Deactivated successfully. Oct 2 20:37:23.730323 env[1057]: time="2023-10-02T20:37:23.730241128Z" level=info msg="CreateContainer within sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\"" Oct 2 20:37:23.732762 env[1057]: time="2023-10-02T20:37:23.732680493Z" level=info msg="StartContainer for \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\"" Oct 2 20:37:23.763699 kubelet[1380]: E1002 20:37:23.761870 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:23.790439 systemd[1]: Started cri-containerd-b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7.scope. Oct 2 20:37:23.812114 systemd[1]: cri-containerd-b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7.scope: Deactivated successfully. Oct 2 20:37:24.128120 env[1057]: time="2023-10-02T20:37:24.127292140Z" level=info msg="shim disconnected" id=b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7 Oct 2 20:37:24.128949 env[1057]: time="2023-10-02T20:37:24.128880459Z" level=warning msg="cleaning up after shim disconnected" id=b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7 namespace=k8s.io Oct 2 20:37:24.129174 env[1057]: time="2023-10-02T20:37:24.129096494Z" level=info msg="cleaning up dead shim" Oct 2 20:37:24.153957 env[1057]: time="2023-10-02T20:37:24.153824456Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:37:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2198 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:37:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:37:24.154960 env[1057]: time="2023-10-02T20:37:24.154849268Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:37:24.158453 env[1057]: time="2023-10-02T20:37:24.158330807Z" level=error msg="Failed to pipe stdout of container \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\"" error="reading from a closed fifo" Oct 2 20:37:24.158604 env[1057]: time="2023-10-02T20:37:24.158480237Z" level=error msg="Failed to pipe stderr of container \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\"" error="reading from a closed fifo" Oct 2 20:37:24.162996 env[1057]: time="2023-10-02T20:37:24.162889847Z" level=error msg="StartContainer for \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:37:24.164341 kubelet[1380]: E1002 20:37:24.163413 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = 
Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7" Oct 2 20:37:24.164341 kubelet[1380]: E1002 20:37:24.163609 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:37:24.164341 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:37:24.164341 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:37:24.164341 kubelet[1380]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cdkcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:37:24.164341 kubelet[1380]: E1002 20:37:24.163706 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:24.468310 kubelet[1380]: I1002 20:37:24.468268 1380 scope.go:117] "RemoveContainer" containerID="e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999" Oct 2 20:37:24.469104 kubelet[1380]: I1002 20:37:24.469056 1380 scope.go:117] "RemoveContainer" containerID="e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999" Oct 2 20:37:24.472278 env[1057]: time="2023-10-02T20:37:24.472169762Z" level=info msg="RemoveContainer for 
\"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\"" Oct 2 20:37:24.473668 env[1057]: time="2023-10-02T20:37:24.473477174Z" level=info msg="RemoveContainer for \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\"" Oct 2 20:37:24.474207 env[1057]: time="2023-10-02T20:37:24.473838592Z" level=error msg="RemoveContainer for \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\" failed" error="failed to set removing state for container \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\": container is already in removing state" Oct 2 20:37:24.474804 kubelet[1380]: E1002 20:37:24.474738 1380 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\": container is already in removing state" containerID="e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999" Oct 2 20:37:24.475414 kubelet[1380]: E1002 20:37:24.475281 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999": container is already in removing state; Skipping pod "cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)" Oct 2 20:37:24.476208 kubelet[1380]: E1002 20:37:24.475956 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:24.480310 env[1057]: time="2023-10-02T20:37:24.480184021Z" level=info msg="RemoveContainer for \"e8b6a653bd0bba12845871e375222126a6cd7b219f1a169f017536b78f089999\" returns successfully" Oct 2 20:37:24.703799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7-rootfs.mount: Deactivated successfully. 
Oct 2 20:37:24.763276 kubelet[1380]: E1002 20:37:24.763054 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:25.696073 kubelet[1380]: E1002 20:37:25.695985 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:25.764272 kubelet[1380]: E1002 20:37:25.764228 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:26.764961 kubelet[1380]: E1002 20:37:26.764868 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:27.236214 kubelet[1380]: W1002 20:37:27.236058 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886eee00_26df_4d0e_9667_d087c5f868c9.slice/cri-containerd-b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7.scope WatchSource:0}: task b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7 not found: not found Oct 2 20:37:27.765891 kubelet[1380]: E1002 20:37:27.765818 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:28.766208 kubelet[1380]: E1002 20:37:28.766083 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:29.767124 kubelet[1380]: E1002 20:37:29.767059 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:30.487110 kubelet[1380]: E1002 20:37:30.487025 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:30.697544 kubelet[1380]: E1002 20:37:30.697501 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:30.767416 kubelet[1380]: E1002 20:37:30.767277 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:31.768746 kubelet[1380]: E1002 20:37:31.768687 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:32.770199 kubelet[1380]: E1002 20:37:32.770120 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:33.771527 kubelet[1380]: E1002 20:37:33.771429 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:34.772716 kubelet[1380]: E1002 20:37:34.772643 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:35.699232 kubelet[1380]: E1002 20:37:35.699166 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:35.773687 kubelet[1380]: E1002 20:37:35.773645 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:36.774510 kubelet[1380]: E1002 
20:37:36.774435 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:37.775213 kubelet[1380]: E1002 20:37:37.775105 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:38.677302 kubelet[1380]: E1002 20:37:38.677236 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:38.775998 kubelet[1380]: E1002 20:37:38.775947 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:39.776812 kubelet[1380]: E1002 20:37:39.776739 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:40.701058 kubelet[1380]: E1002 20:37:40.700966 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:40.777287 kubelet[1380]: E1002 20:37:40.777166 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:41.778089 kubelet[1380]: E1002 20:37:41.778021 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:42.778407 kubelet[1380]: E1002 20:37:42.778342 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:43.779492 kubelet[1380]: E1002 20:37:43.779338 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:44.780401 kubelet[1380]: E1002 20:37:44.780306 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:45.702766 kubelet[1380]: E1002 20:37:45.702679 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:45.780823 kubelet[1380]: E1002 20:37:45.780749 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:46.781431 kubelet[1380]: E1002 20:37:46.781342 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:47.782246 kubelet[1380]: E1002 20:37:47.782089 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:48.783203 kubelet[1380]: E1002 20:37:48.783111 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:49.681250 env[1057]: time="2023-10-02T20:37:49.681097382Z" level=info msg="CreateContainer within sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:37:49.702518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378019329.mount: Deactivated 
successfully. Oct 2 20:37:49.715783 env[1057]: time="2023-10-02T20:37:49.715686386Z" level=info msg="CreateContainer within sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\"" Oct 2 20:37:49.717695 env[1057]: time="2023-10-02T20:37:49.717609323Z" level=info msg="StartContainer for \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\"" Oct 2 20:37:49.776933 systemd[1]: Started cri-containerd-f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a.scope. Oct 2 20:37:49.783708 kubelet[1380]: E1002 20:37:49.783314 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:49.808639 systemd[1]: cri-containerd-f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a.scope: Deactivated successfully. Oct 2 20:37:49.821714 env[1057]: time="2023-10-02T20:37:49.821663466Z" level=info msg="shim disconnected" id=f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a Oct 2 20:37:49.821987 env[1057]: time="2023-10-02T20:37:49.821958119Z" level=warning msg="cleaning up after shim disconnected" id=f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a namespace=k8s.io Oct 2 20:37:49.822063 env[1057]: time="2023-10-02T20:37:49.822049861Z" level=info msg="cleaning up dead shim" Oct 2 20:37:49.834894 env[1057]: time="2023-10-02T20:37:49.834867828Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:37:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2241 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:37:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:37:49.835238 env[1057]: time="2023-10-02T20:37:49.835185383Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:37:49.836270 env[1057]: time="2023-10-02T20:37:49.836205797Z" level=error msg="Failed to pipe stdout of container \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\"" error="reading from a closed fifo" Oct 2 20:37:49.836322 env[1057]: time="2023-10-02T20:37:49.836282251Z" level=error msg="Failed to pipe stderr of container \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\"" error="reading from a closed fifo" Oct 2 20:37:49.838028 env[1057]: time="2023-10-02T20:37:49.837980015Z" level=error msg="StartContainer for \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:37:49.838885 kubelet[1380]: E1002 20:37:49.838271 1380 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a" Oct 2 20:37:49.838885 kubelet[1380]: E1002 
20:37:49.838427 1380 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:37:49.838885 kubelet[1380]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:37:49.838885 kubelet[1380]: rm /hostbin/cilium-mount Oct 2 20:37:49.838885 kubelet[1380]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cdkcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:37:49.838885 kubelet[1380]: E1002 20:37:49.838474 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:50.486323 kubelet[1380]: E1002 20:37:50.486202 1380 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:50.555849 kubelet[1380]: I1002 20:37:50.555093 1380 scope.go:117] "RemoveContainer" containerID="b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7" Oct 2 20:37:50.555849 kubelet[1380]: I1002 20:37:50.555693 1380 scope.go:117] "RemoveContainer" containerID="b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7" Oct 2 20:37:50.559205 env[1057]: time="2023-10-02T20:37:50.559051317Z" level=info msg="RemoveContainer for \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\"" Oct 2 20:37:50.560234 env[1057]: time="2023-10-02T20:37:50.560114571Z" level=info msg="RemoveContainer for 
\"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\"" Oct 2 20:37:50.562855 env[1057]: time="2023-10-02T20:37:50.562512047Z" level=error msg="RemoveContainer for \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\" failed" error="failed to set removing state for container \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\": container is already in removing state" Oct 2 20:37:50.563583 kubelet[1380]: E1002 20:37:50.562925 1380 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\": container is already in removing state" containerID="b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7" Oct 2 20:37:50.563583 kubelet[1380]: E1002 20:37:50.562987 1380 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7": container is already in removing state; Skipping pod "cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)" Oct 2 20:37:50.563787 kubelet[1380]: E1002 20:37:50.563604 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:37:50.570167 env[1057]: time="2023-10-02T20:37:50.569977567Z" level=info msg="RemoveContainer for \"b1dce2c3b164a8cdc818b689972ad111dea294fdfba740c789acdfcb9ec9eca7\" returns successfully" Oct 2 20:37:50.697043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a-rootfs.mount: Deactivated successfully. 
Oct 2 20:37:50.704623 kubelet[1380]: E1002 20:37:50.704528 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:50.784355 kubelet[1380]: E1002 20:37:50.784155 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:51.784400 kubelet[1380]: E1002 20:37:51.784303 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:52.785741 kubelet[1380]: E1002 20:37:52.785677 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:52.939490 kubelet[1380]: W1002 20:37:52.939407 1380 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886eee00_26df_4d0e_9667_d087c5f868c9.slice/cri-containerd-f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a.scope WatchSource:0}: task f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a not found: not found Oct 2 20:37:53.786222 kubelet[1380]: E1002 20:37:53.785969 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:54.786245 kubelet[1380]: E1002 20:37:54.786196 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:55.706312 kubelet[1380]: E1002 20:37:55.706265 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:37:55.787840 kubelet[1380]: E1002 20:37:55.787802 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:56.788760 kubelet[1380]: E1002 20:37:56.788635 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:57.789270 kubelet[1380]: E1002 20:37:57.789219 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:58.790774 kubelet[1380]: E1002 20:37:58.790710 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:37:59.791944 kubelet[1380]: E1002 20:37:59.791875 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:00.708010 kubelet[1380]: E1002 20:38:00.707958 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:38:00.792727 kubelet[1380]: E1002 20:38:00.792689 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:01.793578 kubelet[1380]: E1002 20:38:01.793497 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:02.794539 kubelet[1380]: E1002 20:38:02.794489 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:03.677512 kubelet[1380]: E1002 
20:38:03.677392 1380 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-mtb4m_kube-system(886eee00-26df-4d0e-9667-d087c5f868c9)\"" pod="kube-system/cilium-mtb4m" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" Oct 2 20:38:03.796397 kubelet[1380]: E1002 20:38:03.796352 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:04.541397 env[1057]: time="2023-10-02T20:38:04.541117118Z" level=info msg="StopContainer for \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\" with timeout 30 (s)" Oct 2 20:38:04.542451 env[1057]: time="2023-10-02T20:38:04.542329682Z" level=info msg="Stop container \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\" with signal terminated" Oct 2 20:38:04.565037 systemd[1]: cri-containerd-277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be.scope: Deactivated successfully. Oct 2 20:38:04.563000 audit: BPF prog-id=86 op=UNLOAD Oct 2 20:38:04.568876 kernel: kauditd_printk_skb: 166 callbacks suppressed Oct 2 20:38:04.569087 kernel: audit: type=1334 audit(1696279084.563:713): prog-id=86 op=UNLOAD Oct 2 20:38:04.573000 audit: BPF prog-id=89 op=UNLOAD Oct 2 20:38:04.578234 kernel: audit: type=1334 audit(1696279084.573:714): prog-id=89 op=UNLOAD Oct 2 20:38:04.601627 env[1057]: time="2023-10-02T20:38:04.601432679Z" level=info msg="StopPodSandbox for \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\"" Oct 2 20:38:04.601627 env[1057]: time="2023-10-02T20:38:04.601626222Z" level=info msg="Container to stop \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:38:04.605483 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc-shm.mount: Deactivated successfully. Oct 2 20:38:04.625664 systemd[1]: cri-containerd-eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc.scope: Deactivated successfully. Oct 2 20:38:04.625000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:38:04.631185 kernel: audit: type=1334 audit(1696279084.625:715): prog-id=82 op=UNLOAD Oct 2 20:38:04.632000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:38:04.638203 kernel: audit: type=1334 audit(1696279084.632:716): prog-id=85 op=UNLOAD Oct 2 20:38:04.647057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be-rootfs.mount: Deactivated successfully. Oct 2 20:38:04.658664 env[1057]: time="2023-10-02T20:38:04.658607970Z" level=info msg="shim disconnected" id=277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be Oct 2 20:38:04.658664 env[1057]: time="2023-10-02T20:38:04.658662392Z" level=warning msg="cleaning up after shim disconnected" id=277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be namespace=k8s.io Oct 2 20:38:04.658870 env[1057]: time="2023-10-02T20:38:04.658674825Z" level=info msg="cleaning up dead shim" Oct 2 20:38:04.670285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc-rootfs.mount: Deactivated successfully. 
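The StopContainer request above asks the runtime to terminate the container with SIGTERM and a 30-second grace period before escalating. A minimal sketch of that stop pattern against an ordinary child process (not the containerd API; the subprocess and timeout here are placeholders):

```python
# Sketch of the SIGTERM-then-SIGKILL stop pattern suggested by the
# "StopContainer ... with timeout 30" / "with signal terminated" lines above.
# Demonstrated on a plain subprocess; runtimes apply the same dance per task.
import signal
import subprocess

def stop_gracefully(proc: subprocess.Popen, timeout: float = 30.0) -> int:
    """Send SIGTERM, wait up to `timeout` seconds, then SIGKILL if still alive."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL; cannot be caught or ignored
        return proc.wait()

if __name__ == "__main__":
    child = subprocess.Popen(["sleep", "300"])
    print("exit status:", stop_gracefully(child, timeout=5.0))
```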
Oct 2 20:38:04.674762 env[1057]: time="2023-10-02T20:38:04.674712812Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:38:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2287 runtime=io.containerd.runc.v2\n" Oct 2 20:38:04.678191 env[1057]: time="2023-10-02T20:38:04.678151259Z" level=info msg="shim disconnected" id=eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc Oct 2 20:38:04.678256 env[1057]: time="2023-10-02T20:38:04.678188620Z" level=warning msg="cleaning up after shim disconnected" id=eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc namespace=k8s.io Oct 2 20:38:04.678256 env[1057]: time="2023-10-02T20:38:04.678200652Z" level=info msg="cleaning up dead shim" Oct 2 20:38:04.681558 env[1057]: time="2023-10-02T20:38:04.681522432Z" level=info msg="StopContainer for \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\" returns successfully" Oct 2 20:38:04.682226 env[1057]: time="2023-10-02T20:38:04.682204801Z" level=info msg="StopPodSandbox for \"b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360\"" Oct 2 20:38:04.682372 env[1057]: time="2023-10-02T20:38:04.682350805Z" level=info msg="Container to stop \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:38:04.683861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360-shm.mount: Deactivated successfully. Oct 2 20:38:04.691206 env[1057]: time="2023-10-02T20:38:04.691151429Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:38:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2306 runtime=io.containerd.runc.v2\n" Oct 2 20:38:04.691538 env[1057]: time="2023-10-02T20:38:04.691509230Z" level=info msg="TearDown network for sandbox \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" successfully" Oct 2 20:38:04.691586 env[1057]: time="2023-10-02T20:38:04.691537363Z" level=info msg="StopPodSandbox for \"eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc\" returns successfully" Oct 2 20:38:04.697896 systemd[1]: cri-containerd-b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360.scope: Deactivated successfully. Oct 2 20:38:04.697000 audit: BPF prog-id=78 op=UNLOAD Oct 2 20:38:04.701147 kernel: audit: type=1334 audit(1696279084.697:717): prog-id=78 op=UNLOAD Oct 2 20:38:04.701000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:38:04.704185 kernel: audit: type=1334 audit(1696279084.701:718): prog-id=81 op=UNLOAD Oct 2 20:38:04.727816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360-rootfs.mount: Deactivated successfully. 
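When following a teardown like this across many interleaved entries, it helps to filter the containerd lines for one container or sandbox ID. A small, hypothetical helper for the `env[...]: time=... level=... msg="..."` format shown in this log (the helper name and filtering behaviour are assumptions, not an existing tool):

```python
# Hypothetical helper: extract level/msg from the containerd "env[...]"
# entries shown above and keep only those mentioning a given ID.
import re
import sys

ENTRY = re.compile(r'level=(?P<level>\w+)\s+msg="(?P<msg>(?:[^"\\]|\\.)*)"')

def entries_for(log_text: str, wanted_id: str):
    """Yield (level, msg) pairs whose message contains wanted_id."""
    for match in ENTRY.finditer(log_text):
        msg = match.group("msg")
        if wanted_id in msg:
            yield match.group("level"), msg

if __name__ == "__main__":
    text = sys.stdin.read()
    sandbox = "eac0e453ae18109b280d362aea6f296a61b3b0e49f94615825aac4f33d0808cc"
    for level, msg in entries_for(text, sandbox):
        print(f"[{level}] {msg}")
```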
Oct 2 20:38:04.739617 env[1057]: time="2023-10-02T20:38:04.739524163Z" level=info msg="shim disconnected" id=b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360 Oct 2 20:38:04.739617 env[1057]: time="2023-10-02T20:38:04.739600006Z" level=warning msg="cleaning up after shim disconnected" id=b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360 namespace=k8s.io Oct 2 20:38:04.739617 env[1057]: time="2023-10-02T20:38:04.739618400Z" level=info msg="cleaning up dead shim" Oct 2 20:38:04.749961 env[1057]: time="2023-10-02T20:38:04.749911503Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:38:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2337 runtime=io.containerd.runc.v2\n" Oct 2 20:38:04.750467 env[1057]: time="2023-10-02T20:38:04.750443319Z" level=info msg="TearDown network for sandbox \"b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360\" successfully" Oct 2 20:38:04.750561 env[1057]: time="2023-10-02T20:38:04.750542625Z" level=info msg="StopPodSandbox for \"b7e78c214026e4a86970acef74be8488b5c2826235fc1a0dc210625c9fb0a360\" returns successfully" Oct 2 20:38:04.765828 kubelet[1380]: I1002 20:38:04.765807 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-hubble-tls\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766150 kubelet[1380]: I1002 20:38:04.766006 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-etc-cni-netd\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766150 kubelet[1380]: I1002 20:38:04.766035 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-hostproc\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766150 kubelet[1380]: I1002 20:38:04.766056 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-lib-modules\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766150 kubelet[1380]: I1002 20:38:04.766077 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-net\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766150 kubelet[1380]: I1002 20:38:04.766103 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjpxx\" (UniqueName: \"kubernetes.io/projected/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-kube-api-access-wjpxx\") pod \"4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9\" (UID: \"4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9\") " Oct 2 20:38:04.766314 kubelet[1380]: I1002 20:38:04.766237 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766361 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-xtables-lock\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766398 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-config-path\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766423 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-clustermesh-secrets\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766448 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-ipsec-secrets\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766469 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-bpf-maps\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766493 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdkcq\" (UniqueName: \"kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-kube-api-access-cdkcq\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766514 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-run\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766535 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cni-path\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766557 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-kernel\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766580 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-cilium-config-path\") pod 
\"4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9\" (UID: \"4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766600 1380 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-cgroup\") pod \"886eee00-26df-4d0e-9667-d087c5f868c9\" (UID: \"886eee00-26df-4d0e-9667-d087c5f868c9\") " Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766622 1380 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-hostproc\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766645 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766668 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766685 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.766964 kubelet[1380]: I1002 20:38:04.766828 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.767420 kubelet[1380]: I1002 20:38:04.766858 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.767420 kubelet[1380]: I1002 20:38:04.766894 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.769870 kubelet[1380]: I1002 20:38:04.769788 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.769870 kubelet[1380]: I1002 20:38:04.769826 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.769870 kubelet[1380]: I1002 20:38:04.769848 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:38:04.772336 kubelet[1380]: I1002 20:38:04.772304 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9" (UID: "4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:38:04.773748 kubelet[1380]: I1002 20:38:04.773690 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:38:04.774148 kubelet[1380]: I1002 20:38:04.774085 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-kube-api-access-wjpxx" (OuterVolumeSpecName: "kube-api-access-wjpxx") pod "4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9" (UID: "4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9"). InnerVolumeSpecName "kube-api-access-wjpxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:38:04.774276 kubelet[1380]: I1002 20:38:04.774256 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:38:04.776487 kubelet[1380]: I1002 20:38:04.776452 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:38:04.777776 kubelet[1380]: I1002 20:38:04.777741 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-kube-api-access-cdkcq" (OuterVolumeSpecName: "kube-api-access-cdkcq") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "kube-api-access-cdkcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:38:04.778672 kubelet[1380]: I1002 20:38:04.778643 1380 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "886eee00-26df-4d0e-9667-d087c5f868c9" (UID: "886eee00-26df-4d0e-9667-d087c5f868c9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:38:04.800504 kubelet[1380]: E1002 20:38:04.797871 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:04.867602 kubelet[1380]: I1002 20:38:04.867544 1380 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-clustermesh-secrets\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867602 kubelet[1380]: I1002 20:38:04.867606 1380 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-net\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867841 kubelet[1380]: I1002 20:38:04.867642 1380 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wjpxx\" (UniqueName: \"kubernetes.io/projected/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-kube-api-access-wjpxx\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867841 kubelet[1380]: I1002 20:38:04.867671 1380 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-xtables-lock\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867841 kubelet[1380]: I1002 20:38:04.867700 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-config-path\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867841 kubelet[1380]: I1002 20:38:04.867729 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-ipsec-secrets\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867841 kubelet[1380]: I1002 20:38:04.867756 1380 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-bpf-maps\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867841 kubelet[1380]: I1002 20:38:04.867782 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-cgroup\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.867841 kubelet[1380]: I1002 20:38:04.867810 1380 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cdkcq\" (UniqueName: 
\"kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-kube-api-access-cdkcq\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.868363 kubelet[1380]: I1002 20:38:04.867865 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cilium-run\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.868363 kubelet[1380]: I1002 20:38:04.867892 1380 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-cni-path\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.868363 kubelet[1380]: I1002 20:38:04.867921 1380 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-host-proc-sys-kernel\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.868363 kubelet[1380]: I1002 20:38:04.867949 1380 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9-cilium-config-path\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.868363 kubelet[1380]: I1002 20:38:04.867975 1380 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-lib-modules\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.868363 kubelet[1380]: I1002 20:38:04.868001 1380 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/886eee00-26df-4d0e-9667-d087c5f868c9-hubble-tls\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:04.868363 kubelet[1380]: I1002 20:38:04.868028 1380 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/886eee00-26df-4d0e-9667-d087c5f868c9-etc-cni-netd\") on node \"172.24.4.121\" DevicePath \"\"" Oct 2 20:38:05.604827 systemd[1]: var-lib-kubelet-pods-886eee00\x2d26df\x2d4d0e\x2d9667\x2dd087c5f868c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:38:05.605064 systemd[1]: var-lib-kubelet-pods-886eee00\x2d26df\x2d4d0e\x2d9667\x2dd087c5f868c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:38:05.605281 systemd[1]: var-lib-kubelet-pods-886eee00\x2d26df\x2d4d0e\x2d9667\x2dd087c5f868c9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 20:38:05.605437 systemd[1]: var-lib-kubelet-pods-4becc7a7\x2da1b0\x2d4f3d\x2d9f5a\x2d6ffc2b6648b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwjpxx.mount: Deactivated successfully. Oct 2 20:38:05.605609 systemd[1]: var-lib-kubelet-pods-886eee00\x2d26df\x2d4d0e\x2d9667\x2dd087c5f868c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcdkcq.mount: Deactivated successfully. 
Oct 2 20:38:05.616370 kubelet[1380]: I1002 20:38:05.616330 1380 scope.go:117] "RemoveContainer" containerID="277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be" Oct 2 20:38:05.618688 env[1057]: time="2023-10-02T20:38:05.618616134Z" level=info msg="RemoveContainer for \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\"" Oct 2 20:38:05.623738 env[1057]: time="2023-10-02T20:38:05.623584802Z" level=info msg="RemoveContainer for \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\" returns successfully" Oct 2 20:38:05.627367 kubelet[1380]: I1002 20:38:05.624994 1380 scope.go:117] "RemoveContainer" containerID="277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be" Oct 2 20:38:05.626724 systemd[1]: Removed slice kubepods-besteffort-pod4becc7a7_a1b0_4f3d_9f5a_6ffc2b6648b9.slice. Oct 2 20:38:05.627708 env[1057]: time="2023-10-02T20:38:05.625809525Z" level=error msg="ContainerStatus for \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\": not found" Oct 2 20:38:05.628045 kubelet[1380]: E1002 20:38:05.627999 1380 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\": not found" containerID="277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be" Oct 2 20:38:05.628249 kubelet[1380]: I1002 20:38:05.628226 1380 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be"} err="failed to get container status \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\": rpc error: code = NotFound desc = an error occurred when try to find container \"277cec644e181c20012124b904a202ffca8b85f67614745978057683754099be\": not found" Oct 2 20:38:05.628354 kubelet[1380]: I1002 20:38:05.628265 1380 scope.go:117] "RemoveContainer" containerID="f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a" Oct 2 20:38:05.634422 env[1057]: time="2023-10-02T20:38:05.633505887Z" level=info msg="RemoveContainer for \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\"" Oct 2 20:38:05.638234 env[1057]: time="2023-10-02T20:38:05.638177638Z" level=info msg="RemoveContainer for \"f0a175adaae07a3257938677c289bf7a18837acf3e1be683574e640a36e6109a\" returns successfully" Oct 2 20:38:05.651090 systemd[1]: Removed slice kubepods-burstable-pod886eee00_26df_4d0e_9667_d087c5f868c9.slice. 
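The ContainerStatus "not found" error above is treated as a benign outcome: if the container is already gone, removal has effectively succeeded. A minimal sketch of that idempotent-cleanup idea, using hypothetical names rather than the kubelet or containerd API:

```python
# Sketch of idempotent cleanup: a "not found" answer during removal is treated
# as success, mirroring how the NotFound status above is tolerated.
# All names here are hypothetical illustrations, not a real runtime client.

class NotFoundError(Exception):
    """Raised by the (imaginary) runtime when a container ID does not exist."""

def remove_container(store: dict, container_id: str) -> None:
    if container_id not in store:
        raise NotFoundError(container_id)
    del store[container_id]

def remove_if_present(store: dict, container_id: str) -> bool:
    """Return True if a container was removed, False if it was already gone."""
    try:
        remove_container(store, container_id)
        return True
    except NotFoundError:
        return False  # already removed elsewhere; nothing left to do

if __name__ == "__main__":
    containers = {"277cec644e18": "exited"}
    print(remove_if_present(containers, "277cec644e18"))  # True
    print(remove_if_present(containers, "277cec644e18"))  # False, already gone
```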
Oct 2 20:38:05.710101 kubelet[1380]: E1002 20:38:05.710006 1380 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:38:05.798597 kubelet[1380]: E1002 20:38:05.798540 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:06.680953 kubelet[1380]: I1002 20:38:06.680869 1380 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9" path="/var/lib/kubelet/pods/4becc7a7-a1b0-4f3d-9f5a-6ffc2b6648b9/volumes" Oct 2 20:38:06.681955 kubelet[1380]: I1002 20:38:06.681902 1380 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="886eee00-26df-4d0e-9667-d087c5f868c9" path="/var/lib/kubelet/pods/886eee00-26df-4d0e-9667-d087c5f868c9/volumes" Oct 2 20:38:06.799364 kubelet[1380]: E1002 20:38:06.799324 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:38:07.800376 kubelet[1380]: E1002 20:38:07.800240 1380 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
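The closing "Cleaned up orphaned pod volumes dir" entries refer to directories under /var/lib/kubelet/pods/&lt;pod-UID&gt;/volumes that outlive their pods. A rough, hypothetical sketch of such a sweep, which only removes volume directories that are already empty; the kubelet's real housekeeping is considerably more involved:

```python
# Hypothetical sketch of sweeping orphaned pod volume directories, loosely
# modelled on the "Cleaned up orphaned pod volumes dir" entries above.
# Assumes a kubelet-style layout /var/lib/kubelet/pods/<uid>/volumes and only
# removes directories that are already empty.
import os

def sweep_orphaned_volumes(pods_root: str, active_uids: set[str]) -> list[str]:
    """Remove empty <uid>/volumes dirs for pods no longer active; return their UIDs."""
    removed = []
    if not os.path.isdir(pods_root):
        return removed
    for uid in os.listdir(pods_root):
        if uid in active_uids:
            continue
        volumes = os.path.join(pods_root, uid, "volumes")
        if os.path.isdir(volumes) and not os.listdir(volumes):
            os.rmdir(volumes)
            removed.append(uid)
    return removed

if __name__ == "__main__":
    print(sweep_orphaned_volumes("/var/lib/kubelet/pods", active_uids=set()))
```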