Oct 2 20:13:55.067170 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 20:13:55.067191 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:13:55.067204 kernel: BIOS-provided physical RAM map: Oct 2 20:13:55.067211 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 20:13:55.067218 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 20:13:55.067225 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 20:13:55.067233 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Oct 2 20:13:55.067240 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Oct 2 20:13:55.067249 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 2 20:13:55.067255 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 20:13:55.067262 kernel: NX (Execute Disable) protection: active Oct 2 20:13:55.067269 kernel: SMBIOS 2.8 present. Oct 2 20:13:55.067275 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Oct 2 20:13:55.067282 kernel: Hypervisor detected: KVM Oct 2 20:13:55.067291 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 20:13:55.067300 kernel: kvm-clock: cpu 0, msr 30f8a001, primary cpu clock Oct 2 20:13:55.067307 kernel: kvm-clock: using sched offset of 6118752407 cycles Oct 2 20:13:55.067315 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 20:13:55.067322 kernel: tsc: Detected 1996.249 MHz processor Oct 2 20:13:55.067330 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 20:13:55.067337 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 20:13:55.067345 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Oct 2 20:13:55.067352 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 20:13:55.067362 kernel: ACPI: Early table checksum verification disabled Oct 2 20:13:55.067369 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Oct 2 20:13:55.067377 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:13:55.067384 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:13:55.067392 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:13:55.067399 kernel: ACPI: FACS 0x000000007FFE0000 000040 Oct 2 20:13:55.067406 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:13:55.067414 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 20:13:55.067421 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Oct 2 20:13:55.067430 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Oct 2 20:13:55.067438 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Oct 2 20:13:55.067445 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Oct 2 20:13:55.067452 
kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Oct 2 20:13:55.067459 kernel: No NUMA configuration found Oct 2 20:13:55.067467 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Oct 2 20:13:55.067474 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Oct 2 20:13:55.067482 kernel: Zone ranges: Oct 2 20:13:55.067494 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 20:13:55.067502 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Oct 2 20:13:55.067510 kernel: Normal empty Oct 2 20:13:55.067518 kernel: Movable zone start for each node Oct 2 20:13:55.067525 kernel: Early memory node ranges Oct 2 20:13:55.067533 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 20:13:55.067542 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Oct 2 20:13:55.067550 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Oct 2 20:13:55.067558 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 20:13:55.067565 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 20:13:55.071608 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Oct 2 20:13:55.071617 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 2 20:13:55.071625 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 20:13:55.071633 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 20:13:55.071641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 2 20:13:55.071652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 20:13:55.071660 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 20:13:55.071668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 20:13:55.071676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 20:13:55.071684 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 20:13:55.071691 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 20:13:55.071699 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Oct 2 20:13:55.071706 kernel: Booting paravirtualized kernel on KVM Oct 2 20:13:55.071715 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 20:13:55.071722 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 20:13:55.071733 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 20:13:55.071741 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 20:13:55.071748 kernel: pcpu-alloc: [0] 0 1 Oct 2 20:13:55.071756 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Oct 2 20:13:55.071764 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 2 20:13:55.071771 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Oct 2 20:13:55.071779 kernel: Policy zone: DMA32 Oct 2 20:13:55.071788 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:13:55.071798 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Oct 2 20:13:55.071806 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 20:13:55.071813 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 20:13:55.071821 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 20:13:55.071829 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 121020K reserved, 0K cma-reserved) Oct 2 20:13:55.071837 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 20:13:55.071845 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 20:13:55.071852 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 20:13:55.071863 kernel: rcu: Hierarchical RCU implementation. Oct 2 20:13:55.071871 kernel: rcu: RCU event tracing is enabled. Oct 2 20:13:55.071879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 20:13:55.071887 kernel: Rude variant of Tasks RCU enabled. Oct 2 20:13:55.071895 kernel: Tracing variant of Tasks RCU enabled. Oct 2 20:13:55.071903 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 20:13:55.071910 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 20:13:55.071918 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 2 20:13:55.071925 kernel: Console: colour VGA+ 80x25 Oct 2 20:13:55.071936 kernel: printk: console [tty0] enabled Oct 2 20:13:55.071944 kernel: printk: console [ttyS0] enabled Oct 2 20:13:55.071952 kernel: ACPI: Core revision 20210730 Oct 2 20:13:55.071959 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 20:13:55.071967 kernel: x2apic enabled Oct 2 20:13:55.071974 kernel: Switched APIC routing to physical x2apic. Oct 2 20:13:55.071982 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 2 20:13:55.071990 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Oct 2 20:13:55.071998 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Oct 2 20:13:55.072005 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 2 20:13:55.072016 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 2 20:13:55.072024 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 20:13:55.072031 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 20:13:55.072039 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 20:13:55.072047 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 20:13:55.072054 kernel: Speculative Store Bypass: Vulnerable Oct 2 20:13:55.072062 kernel: x86/fpu: x87 FPU will use FXSAVE Oct 2 20:13:55.072069 kernel: Freeing SMP alternatives memory: 32K Oct 2 20:13:55.072077 kernel: pid_max: default: 32768 minimum: 301 Oct 2 20:13:55.072086 kernel: LSM: Security Framework initializing Oct 2 20:13:55.072094 kernel: SELinux: Initializing. Oct 2 20:13:55.072101 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 20:13:55.072109 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 20:13:55.072117 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Oct 2 20:13:55.072125 kernel: Performance Events: AMD PMU driver. Oct 2 20:13:55.072133 kernel: ... version: 0 Oct 2 20:13:55.072140 kernel: ... bit width: 48 Oct 2 20:13:55.072148 kernel: ... generic registers: 4 Oct 2 20:13:55.072167 kernel: ... 
value mask: 0000ffffffffffff Oct 2 20:13:55.072175 kernel: ... max period: 00007fffffffffff Oct 2 20:13:55.072185 kernel: ... fixed-purpose events: 0 Oct 2 20:13:55.072193 kernel: ... event mask: 000000000000000f Oct 2 20:13:55.072201 kernel: signal: max sigframe size: 1440 Oct 2 20:13:55.072209 kernel: rcu: Hierarchical SRCU implementation. Oct 2 20:13:55.072216 kernel: smp: Bringing up secondary CPUs ... Oct 2 20:13:55.072225 kernel: x86: Booting SMP configuration: Oct 2 20:13:55.072235 kernel: .... node #0, CPUs: #1 Oct 2 20:13:55.072243 kernel: kvm-clock: cpu 1, msr 30f8a041, secondary cpu clock Oct 2 20:13:55.072251 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Oct 2 20:13:55.072259 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 20:13:55.072267 kernel: smpboot: Max logical packages: 2 Oct 2 20:13:55.072275 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Oct 2 20:13:55.072282 kernel: devtmpfs: initialized Oct 2 20:13:55.072290 kernel: x86/mm: Memory block size: 128MB Oct 2 20:13:55.072298 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 20:13:55.072308 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 20:13:55.072316 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 20:13:55.072324 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 20:13:55.072332 kernel: audit: initializing netlink subsys (disabled) Oct 2 20:13:55.072340 kernel: audit: type=2000 audit(1696277633.779:1): state=initialized audit_enabled=0 res=1 Oct 2 20:13:55.072348 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 20:13:55.072356 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 20:13:55.072364 kernel: cpuidle: using governor menu Oct 2 20:13:55.072372 kernel: ACPI: bus type PCI registered Oct 2 20:13:55.072382 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 20:13:55.072390 kernel: dca service started, version 1.12.1 Oct 2 20:13:55.072398 kernel: PCI: Using configuration type 1 for base access Oct 2 20:13:55.072406 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 2 20:13:55.072414 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 20:13:55.072422 kernel: ACPI: Added _OSI(Module Device) Oct 2 20:13:55.072430 kernel: ACPI: Added _OSI(Processor Device) Oct 2 20:13:55.072438 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 20:13:55.072446 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 20:13:55.072456 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 20:13:55.072464 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 20:13:55.072472 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 20:13:55.072480 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 20:13:55.072488 kernel: ACPI: Interpreter enabled Oct 2 20:13:55.072496 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 20:13:55.072504 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 20:13:55.072512 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 20:13:55.072520 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Oct 2 20:13:55.072529 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 20:13:55.072737 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 2 20:13:55.072838 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Oct 2 20:13:55.072852 kernel: acpiphp: Slot [3] registered Oct 2 20:13:55.072860 kernel: acpiphp: Slot [4] registered Oct 2 20:13:55.072868 kernel: acpiphp: Slot [5] registered Oct 2 20:13:55.072876 kernel: acpiphp: Slot [6] registered Oct 2 20:13:55.072888 kernel: acpiphp: Slot [7] registered Oct 2 20:13:55.072896 kernel: acpiphp: Slot [8] registered Oct 2 20:13:55.072904 kernel: acpiphp: Slot [9] registered Oct 2 20:13:55.072911 kernel: acpiphp: Slot [10] registered Oct 2 20:13:55.072919 kernel: acpiphp: Slot [11] registered Oct 2 20:13:55.072927 kernel: acpiphp: Slot [12] registered Oct 2 20:13:55.072935 kernel: acpiphp: Slot [13] registered Oct 2 20:13:55.072943 kernel: acpiphp: Slot [14] registered Oct 2 20:13:55.072951 kernel: acpiphp: Slot [15] registered Oct 2 20:13:55.072959 kernel: acpiphp: Slot [16] registered Oct 2 20:13:55.072970 kernel: acpiphp: Slot [17] registered Oct 2 20:13:55.072977 kernel: acpiphp: Slot [18] registered Oct 2 20:13:55.072985 kernel: acpiphp: Slot [19] registered Oct 2 20:13:55.072993 kernel: acpiphp: Slot [20] registered Oct 2 20:13:55.073001 kernel: acpiphp: Slot [21] registered Oct 2 20:13:55.073009 kernel: acpiphp: Slot [22] registered Oct 2 20:13:55.073017 kernel: acpiphp: Slot [23] registered Oct 2 20:13:55.073024 kernel: acpiphp: Slot [24] registered Oct 2 20:13:55.073032 kernel: acpiphp: Slot [25] registered Oct 2 20:13:55.073042 kernel: acpiphp: Slot [26] registered Oct 2 20:13:55.073050 kernel: acpiphp: Slot [27] registered Oct 2 20:13:55.073058 kernel: acpiphp: Slot [28] registered Oct 2 20:13:55.073066 kernel: acpiphp: Slot [29] registered Oct 2 20:13:55.073074 kernel: acpiphp: Slot [30] registered Oct 2 20:13:55.073082 kernel: acpiphp: Slot [31] registered Oct 2 20:13:55.073090 kernel: PCI host bridge to bus 0000:00 Oct 2 20:13:55.073195 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 20:13:55.073275 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 20:13:55.073357 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 20:13:55.073437 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 2 
20:13:55.073523 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Oct 2 20:13:55.073618 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 20:13:55.073721 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 20:13:55.073829 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 20:13:55.073941 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Oct 2 20:13:55.074038 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Oct 2 20:13:55.074127 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Oct 2 20:13:55.074214 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Oct 2 20:13:55.074301 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Oct 2 20:13:55.074389 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Oct 2 20:13:55.074510 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 20:13:55.074616 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Oct 2 20:13:55.074702 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Oct 2 20:13:55.074793 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Oct 2 20:13:55.074876 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Oct 2 20:13:55.074965 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Oct 2 20:13:55.075049 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Oct 2 20:13:55.075136 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Oct 2 20:13:55.075220 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 20:13:55.075362 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 2 20:13:55.075451 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Oct 2 20:13:55.075535 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Oct 2 20:13:55.080749 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Oct 2 20:13:55.080858 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Oct 2 20:13:55.081037 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 2 20:13:55.081171 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 2 20:13:55.081276 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Oct 2 20:13:55.081373 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Oct 2 20:13:55.081478 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Oct 2 20:13:55.081612 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Oct 2 20:13:55.081715 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Oct 2 20:13:55.081830 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 20:13:55.081926 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Oct 2 20:13:55.082038 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Oct 2 20:13:55.082058 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 20:13:55.082073 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 20:13:55.082087 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 20:13:55.082101 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 20:13:55.082115 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 20:13:55.082133 kernel: iommu: Default domain type: Translated Oct 2 20:13:55.082148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Oct 2 20:13:55.082248 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Oct 2 20:13:55.082337 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 20:13:55.082425 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Oct 2 20:13:55.082439 kernel: vgaarb: loaded Oct 2 20:13:55.082447 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 20:13:55.082458 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 20:13:55.082466 kernel: PTP clock support registered Oct 2 20:13:55.082478 kernel: PCI: Using ACPI for IRQ routing Oct 2 20:13:55.082486 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 20:13:55.082494 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 20:13:55.082503 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Oct 2 20:13:55.082511 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 20:13:55.082519 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 20:13:55.082527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 20:13:55.082535 kernel: pnp: PnP ACPI init Oct 2 20:13:55.082677 kernel: pnp 00:03: [dma 2] Oct 2 20:13:55.082696 kernel: pnp: PnP ACPI: found 5 devices Oct 2 20:13:55.082704 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 20:13:55.082712 kernel: NET: Registered PF_INET protocol family Oct 2 20:13:55.082721 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 20:13:55.082730 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 2 20:13:55.082738 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 20:13:55.082746 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 20:13:55.082755 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 20:13:55.082774 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 2 20:13:55.082782 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 20:13:55.082790 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 20:13:55.082798 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 20:13:55.082807 kernel: NET: Registered PF_XDP protocol family Oct 2 20:13:55.082896 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 20:13:55.082992 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 20:13:55.083079 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 20:13:55.083157 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 2 20:13:55.083240 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Oct 2 20:13:55.083348 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Oct 2 20:13:55.083439 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 20:13:55.083527 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 20:13:55.083541 kernel: PCI: CLS 0 bytes, default 64 Oct 2 20:13:55.083550 kernel: Initialise system trusted keyrings Oct 2 20:13:55.083559 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 2 20:13:55.086623 kernel: Key type asymmetric registered Oct 2 20:13:55.086635 kernel: Asymmetric key parser 'x509' registered Oct 2 20:13:55.086643 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 20:13:55.086651 kernel: io scheduler mq-deadline 
registered Oct 2 20:13:55.086659 kernel: io scheduler kyber registered Oct 2 20:13:55.086667 kernel: io scheduler bfq registered Oct 2 20:13:55.086676 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 20:13:55.086684 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Oct 2 20:13:55.086693 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 20:13:55.086701 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 2 20:13:55.086720 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 20:13:55.086728 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 20:13:55.086736 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 20:13:55.086744 kernel: random: crng init done Oct 2 20:13:55.086752 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 20:13:55.086760 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 20:13:55.086768 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 20:13:55.086912 kernel: rtc_cmos 00:04: RTC can wake from S4 Oct 2 20:13:55.086933 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 20:13:55.087013 kernel: rtc_cmos 00:04: registered as rtc0 Oct 2 20:13:55.087086 kernel: rtc_cmos 00:04: setting system clock to 2023-10-02T20:13:54 UTC (1696277634) Oct 2 20:13:55.087171 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Oct 2 20:13:55.087184 kernel: NET: Registered PF_INET6 protocol family Oct 2 20:13:55.087192 kernel: Segment Routing with IPv6 Oct 2 20:13:55.087200 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 20:13:55.087208 kernel: NET: Registered PF_PACKET protocol family Oct 2 20:13:55.087217 kernel: Key type dns_resolver registered Oct 2 20:13:55.087229 kernel: IPI shorthand broadcast: enabled Oct 2 20:13:55.087238 kernel: sched_clock: Marking stable (699728732, 116922691)->(845403834, -28752411) Oct 2 20:13:55.087246 kernel: registered taskstats version 1 Oct 2 20:13:55.087254 kernel: Loading compiled-in X.509 certificates Oct 2 20:13:55.087262 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 20:13:55.087270 kernel: Key type .fscrypt registered Oct 2 20:13:55.087278 kernel: Key type fscrypt-provisioning registered Oct 2 20:13:55.087287 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 2 20:13:55.087299 kernel: ima: Allocated hash algorithm: sha1 Oct 2 20:13:55.087307 kernel: ima: No architecture policies found Oct 2 20:13:55.087315 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 20:13:55.087323 kernel: Write protecting the kernel read-only data: 28672k Oct 2 20:13:55.087331 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 20:13:55.087340 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 20:13:55.087348 kernel: Run /init as init process Oct 2 20:13:55.087356 kernel: with arguments: Oct 2 20:13:55.087364 kernel: /init Oct 2 20:13:55.087374 kernel: with environment: Oct 2 20:13:55.087381 kernel: HOME=/ Oct 2 20:13:55.087389 kernel: TERM=linux Oct 2 20:13:55.087397 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 20:13:55.087408 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:13:55.087419 systemd[1]: Detected virtualization kvm. Oct 2 20:13:55.087428 systemd[1]: Detected architecture x86-64. Oct 2 20:13:55.087437 systemd[1]: Running in initrd. Oct 2 20:13:55.087448 systemd[1]: No hostname configured, using default hostname. Oct 2 20:13:55.087456 systemd[1]: Hostname set to . Oct 2 20:13:55.087465 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:13:55.087474 systemd[1]: Queued start job for default target initrd.target. Oct 2 20:13:55.087482 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:13:55.087491 systemd[1]: Reached target cryptsetup.target. Oct 2 20:13:55.087500 systemd[1]: Reached target paths.target. Oct 2 20:13:55.087508 systemd[1]: Reached target slices.target. Oct 2 20:13:55.087518 systemd[1]: Reached target swap.target. Oct 2 20:13:55.087527 systemd[1]: Reached target timers.target. Oct 2 20:13:55.087536 systemd[1]: Listening on iscsid.socket. Oct 2 20:13:55.087544 systemd[1]: Listening on iscsiuio.socket. Oct 2 20:13:55.087553 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 20:13:55.087562 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 20:13:55.087634 systemd[1]: Listening on systemd-journald.socket. Oct 2 20:13:55.087644 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:13:55.087655 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:13:55.087664 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:13:55.087673 systemd[1]: Reached target sockets.target. Oct 2 20:13:55.087681 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:13:55.087698 systemd[1]: Finished network-cleanup.service. Oct 2 20:13:55.087708 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 20:13:55.087719 systemd[1]: Starting systemd-journald.service... Oct 2 20:13:55.087728 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:13:55.087737 systemd[1]: Starting systemd-resolved.service... Oct 2 20:13:55.087745 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 20:13:55.087754 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:13:55.087763 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 20:13:55.087772 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 2 20:13:55.087785 systemd-journald[184]: Journal started Oct 2 20:13:55.087843 systemd-journald[184]: Runtime Journal (/run/log/journal/cb5b1a4487e74406a645196a11f0ad98) is 4.9M, max 39.5M, 34.5M free. Oct 2 20:13:55.039593 systemd-modules-load[185]: Inserted module 'overlay' Oct 2 20:13:55.116670 kernel: Bridge firewalling registered Oct 2 20:13:55.116709 systemd[1]: Started systemd-journald.service. Oct 2 20:13:55.116727 kernel: audit: type=1130 audit(1696277635.101:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.094828 systemd-modules-load[185]: Inserted module 'br_netfilter' Oct 2 20:13:55.120934 kernel: audit: type=1130 audit(1696277635.116:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.095199 systemd-resolved[186]: Positive Trust Anchors: Oct 2 20:13:55.124998 kernel: audit: type=1130 audit(1696277635.120:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.095212 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:13:55.130111 kernel: audit: type=1130 audit(1696277635.124:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.130127 kernel: SCSI subsystem initialized Oct 2 20:13:55.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.095247 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:13:55.097952 systemd-resolved[186]: Defaulting to hostname 'linux'. Oct 2 20:13:55.117231 systemd[1]: Started systemd-resolved.service. Oct 2 20:13:55.143683 kernel: audit: type=1130 audit(1696277635.139:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:13:55.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.121616 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 20:13:55.154747 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 20:13:55.154773 kernel: device-mapper: uevent: version 1.0.3 Oct 2 20:13:55.154785 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 20:13:55.125552 systemd[1]: Reached target nss-lookup.target. Oct 2 20:13:55.131228 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 20:13:55.133694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:13:55.139314 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:13:55.151075 systemd-modules-load[185]: Inserted module 'dm_multipath' Oct 2 20:13:55.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.153390 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:13:55.163010 kernel: audit: type=1130 audit(1696277635.156:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.157939 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:13:55.169639 kernel: audit: type=1130 audit(1696277635.163:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.163919 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 20:13:55.165439 systemd[1]: Starting dracut-cmdline.service... Oct 2 20:13:55.171947 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:13:55.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.176618 kernel: audit: type=1130 audit(1696277635.172:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.181154 dracut-cmdline[207]: dracut-dracut-053 Oct 2 20:13:55.184441 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:13:55.255644 kernel: Loading iSCSI transport class v2.0-870. 
Oct 2 20:13:55.269659 kernel: iscsi: registered transport (tcp) Oct 2 20:13:55.294897 kernel: iscsi: registered transport (qla4xxx) Oct 2 20:13:55.294959 kernel: QLogic iSCSI HBA Driver Oct 2 20:13:55.350663 systemd[1]: Finished dracut-cmdline.service. Oct 2 20:13:55.352235 systemd[1]: Starting dracut-pre-udev.service... Oct 2 20:13:55.363277 kernel: audit: type=1130 audit(1696277635.350:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.435744 kernel: raid6: sse2x4 gen() 12541 MB/s Oct 2 20:13:55.452681 kernel: raid6: sse2x4 xor() 4987 MB/s Oct 2 20:13:55.469677 kernel: raid6: sse2x2 gen() 14362 MB/s Oct 2 20:13:55.486683 kernel: raid6: sse2x2 xor() 8791 MB/s Oct 2 20:13:55.503645 kernel: raid6: sse2x1 gen() 11090 MB/s Oct 2 20:13:55.521508 kernel: raid6: sse2x1 xor() 6904 MB/s Oct 2 20:13:55.521620 kernel: raid6: using algorithm sse2x2 gen() 14362 MB/s Oct 2 20:13:55.521653 kernel: raid6: .... xor() 8791 MB/s, rmw enabled Oct 2 20:13:55.522282 kernel: raid6: using ssse3x2 recovery algorithm Oct 2 20:13:55.536666 kernel: xor: measuring software checksum speed Oct 2 20:13:55.539302 kernel: prefetch64-sse : 18465 MB/sec Oct 2 20:13:55.539359 kernel: generic_sse : 16751 MB/sec Oct 2 20:13:55.539386 kernel: xor: using function: prefetch64-sse (18465 MB/sec) Oct 2 20:13:55.653648 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 20:13:55.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.673124 systemd[1]: Finished dracut-pre-udev.service. Oct 2 20:13:55.675000 audit: BPF prog-id=7 op=LOAD Oct 2 20:13:55.676000 audit: BPF prog-id=8 op=LOAD Oct 2 20:13:55.678499 systemd[1]: Starting systemd-udevd.service... Oct 2 20:13:55.714913 systemd-udevd[386]: Using default interface naming scheme 'v252'. Oct 2 20:13:55.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.726729 systemd[1]: Started systemd-udevd.service. Oct 2 20:13:55.728297 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 20:13:55.756434 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation Oct 2 20:13:55.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.799840 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 20:13:55.802719 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:13:55.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:55.848370 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:13:55.916595 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Oct 2 20:13:55.947596 kernel: libata version 3.00 loaded. 
Oct 2 20:13:55.951992 kernel: ata_piix 0000:00:01.1: version 2.13 Oct 2 20:13:55.953790 kernel: scsi host0: ata_piix Oct 2 20:13:55.953955 kernel: scsi host1: ata_piix Oct 2 20:13:55.954081 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Oct 2 20:13:55.954096 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Oct 2 20:13:55.965599 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 20:13:55.965638 kernel: GPT:17805311 != 41943039 Oct 2 20:13:55.965650 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 20:13:55.965661 kernel: GPT:17805311 != 41943039 Oct 2 20:13:55.965671 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 20:13:55.965682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:13:56.148639 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (430) Oct 2 20:13:56.166336 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 20:13:56.189602 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 20:13:56.201556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:13:56.209455 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 20:13:56.210881 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 20:13:56.215257 systemd[1]: Starting disk-uuid.service... Oct 2 20:13:56.231834 disk-uuid[456]: Primary Header is updated. Oct 2 20:13:56.231834 disk-uuid[456]: Secondary Entries is updated. Oct 2 20:13:56.231834 disk-uuid[456]: Secondary Header is updated. Oct 2 20:13:56.243634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:13:56.255617 kernel: GPT:disk_guids don't match. Oct 2 20:13:56.255680 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 20:13:56.255708 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:13:57.293650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 20:13:57.293849 disk-uuid[457]: The operation has completed successfully. Oct 2 20:13:57.369028 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 20:13:57.370019 systemd[1]: Finished disk-uuid.service. Oct 2 20:13:57.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:57.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:57.393245 systemd[1]: Starting verity-setup.service... Oct 2 20:13:57.431637 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 20:13:57.613218 systemd[1]: Found device dev-mapper-usr.device. Oct 2 20:13:57.619699 systemd[1]: Mounting sysusr-usr.mount... Oct 2 20:13:57.625941 systemd[1]: Finished verity-setup.service. Oct 2 20:13:57.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.258649 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 20:13:58.259783 systemd[1]: Mounted sysusr-usr.mount. Oct 2 20:13:58.262447 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Oct 2 20:13:58.264342 systemd[1]: Starting ignition-setup.service... Oct 2 20:13:58.269529 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 20:13:58.299163 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:13:58.299287 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:13:58.299320 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:13:58.328957 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 20:13:58.350838 systemd[1]: Finished ignition-setup.service. Oct 2 20:13:58.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.355394 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 20:13:58.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.422963 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 20:13:58.425000 audit: BPF prog-id=9 op=LOAD Oct 2 20:13:58.426623 systemd[1]: Starting systemd-networkd.service... Oct 2 20:13:58.452402 systemd-networkd[627]: lo: Link UP Oct 2 20:13:58.453203 systemd-networkd[627]: lo: Gained carrier Oct 2 20:13:58.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.453845 systemd-networkd[627]: Enumeration completed Oct 2 20:13:58.454124 systemd-networkd[627]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:13:58.454247 systemd[1]: Started systemd-networkd.service. Oct 2 20:13:58.455178 systemd[1]: Reached target network.target. Oct 2 20:13:58.456523 systemd[1]: Starting iscsiuio.service... Oct 2 20:13:58.458083 systemd-networkd[627]: eth0: Link UP Oct 2 20:13:58.458088 systemd-networkd[627]: eth0: Gained carrier Oct 2 20:13:58.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.464072 systemd[1]: Started iscsiuio.service. Oct 2 20:13:58.466334 systemd[1]: Starting iscsid.service... Oct 2 20:13:58.469551 iscsid[632]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:13:58.469551 iscsid[632]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 20:13:58.469551 iscsid[632]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 20:13:58.469551 iscsid[632]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 2 20:13:58.469551 iscsid[632]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:13:58.469551 iscsid[632]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 20:13:58.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.472988 systemd[1]: Started iscsid.service. Oct 2 20:13:58.474479 systemd[1]: Starting dracut-initqueue.service... Oct 2 20:13:58.482679 systemd-networkd[627]: eth0: DHCPv4 address 172.24.4.201/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 2 20:13:58.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.493412 systemd[1]: Finished dracut-initqueue.service. Oct 2 20:13:58.494106 systemd[1]: Reached target remote-fs-pre.target. Oct 2 20:13:58.494648 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:13:58.495215 systemd[1]: Reached target remote-fs.target. Oct 2 20:13:58.497978 systemd[1]: Starting dracut-pre-mount.service... Oct 2 20:13:58.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.507530 systemd[1]: Finished dracut-pre-mount.service. Oct 2 20:13:58.699021 ignition[566]: Ignition 2.14.0 Oct 2 20:13:58.699973 ignition[566]: Stage: fetch-offline Oct 2 20:13:58.700168 ignition[566]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:13:58.700220 ignition[566]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:13:58.702828 ignition[566]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:13:58.703072 ignition[566]: parsed url from cmdline: "" Oct 2 20:13:58.703082 ignition[566]: no config URL provided Oct 2 20:13:58.703095 ignition[566]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:13:58.705946 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 20:13:58.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:58.703114 ignition[566]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:13:58.710006 systemd[1]: Starting ignition-fetch.service... 
Oct 2 20:13:58.703126 ignition[566]: failed to fetch config: resource requires networking Oct 2 20:13:58.704028 ignition[566]: Ignition finished successfully Oct 2 20:13:58.731673 ignition[650]: Ignition 2.14.0 Oct 2 20:13:58.733407 ignition[650]: Stage: fetch Oct 2 20:13:58.734898 ignition[650]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:13:58.736706 ignition[650]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:13:58.738930 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:13:58.740883 ignition[650]: parsed url from cmdline: "" Oct 2 20:13:58.741019 ignition[650]: no config URL provided Oct 2 20:13:58.742230 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:13:58.743920 ignition[650]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:13:58.752764 ignition[650]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 2 20:13:58.752940 ignition[650]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 2 20:13:58.757205 ignition[650]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 2 20:13:58.991863 ignition[650]: GET result: OK Oct 2 20:13:58.992034 ignition[650]: parsing config with SHA512: e73424b303f8fc207226748afa86b4be94ac5d6ffba63ec02dbdad1a3d5d567bf6add12ebb4adb64f41705b6fd52ada6b548778c9f89d846dd832d54d7e1c277 Oct 2 20:13:59.050336 unknown[650]: fetched base config from "system" Oct 2 20:13:59.051892 unknown[650]: fetched base config from "system" Oct 2 20:13:59.053246 unknown[650]: fetched user config from "openstack" Oct 2 20:13:59.055685 ignition[650]: fetch: fetch complete Oct 2 20:13:59.056757 ignition[650]: fetch: fetch passed Oct 2 20:13:59.056959 ignition[650]: Ignition finished successfully Oct 2 20:13:59.060537 systemd[1]: Finished ignition-fetch.service. Oct 2 20:13:59.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.067136 kernel: kauditd_printk_skb: 18 callbacks suppressed Oct 2 20:13:59.067206 kernel: audit: type=1130 audit(1696277639.062:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.065646 systemd[1]: Starting ignition-kargs.service... Oct 2 20:13:59.103373 ignition[656]: Ignition 2.14.0 Oct 2 20:13:59.103402 ignition[656]: Stage: kargs Oct 2 20:13:59.103698 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:13:59.103747 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:13:59.107123 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:13:59.109762 ignition[656]: kargs: kargs passed Oct 2 20:13:59.109879 ignition[656]: Ignition finished successfully Oct 2 20:13:59.112195 systemd[1]: Finished ignition-kargs.service. Oct 2 20:13:59.123034 kernel: audit: type=1130 audit(1696277639.112:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:13:59.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.115346 systemd[1]: Starting ignition-disks.service... Oct 2 20:13:59.135561 ignition[662]: Ignition 2.14.0 Oct 2 20:13:59.135628 ignition[662]: Stage: disks Oct 2 20:13:59.135887 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:13:59.135930 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:13:59.138224 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:13:59.140193 ignition[662]: disks: disks passed Oct 2 20:13:59.151481 kernel: audit: type=1130 audit(1696277639.141:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.141697 systemd[1]: Finished ignition-disks.service. Oct 2 20:13:59.140269 ignition[662]: Ignition finished successfully Oct 2 20:13:59.142371 systemd[1]: Reached target initrd-root-device.target. Oct 2 20:13:59.151929 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:13:59.153479 systemd[1]: Reached target local-fs.target. Oct 2 20:13:59.155024 systemd[1]: Reached target sysinit.target. Oct 2 20:13:59.156626 systemd[1]: Reached target basic.target. Oct 2 20:13:59.159439 systemd[1]: Starting systemd-fsck-root.service... Oct 2 20:13:59.178381 systemd-fsck[670]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks Oct 2 20:13:59.186945 systemd[1]: Finished systemd-fsck-root.service. Oct 2 20:13:59.188293 systemd[1]: Mounting sysroot.mount... Oct 2 20:13:59.198455 kernel: audit: type=1130 audit(1696277639.186:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.207708 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 20:13:59.207695 systemd[1]: Mounted sysroot.mount. Oct 2 20:13:59.208933 systemd[1]: Reached target initrd-root-fs.target. Oct 2 20:13:59.214953 systemd[1]: Mounting sysroot-usr.mount... Oct 2 20:13:59.218248 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 20:13:59.220965 systemd[1]: Starting flatcar-openstack-hostname.service... Oct 2 20:13:59.222259 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 20:13:59.222406 systemd[1]: Reached target ignition-diskful.target. Oct 2 20:13:59.227214 systemd[1]: Mounted sysroot-usr.mount. Oct 2 20:13:59.245942 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:13:59.251398 systemd[1]: Starting initrd-setup-root.service... 
Oct 2 20:13:59.265765 initrd-setup-root[682]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 20:13:59.288627 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (677) Oct 2 20:13:59.296134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:13:59.296190 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:13:59.296203 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:13:59.298807 initrd-setup-root[691]: cut: /sysroot/etc/group: No such file or directory Oct 2 20:13:59.306007 initrd-setup-root[714]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 20:13:59.321321 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 20:13:59.322655 initrd-setup-root[724]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 20:13:59.687818 systemd[1]: Finished initrd-setup-root.service. Oct 2 20:13:59.700525 kernel: audit: type=1130 audit(1696277639.688:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.692195 systemd[1]: Starting ignition-mount.service... Oct 2 20:13:59.701650 systemd[1]: Starting sysroot-boot.service... Oct 2 20:13:59.724511 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 20:13:59.724810 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 20:13:59.750984 ignition[746]: INFO : Ignition 2.14.0 Oct 2 20:13:59.750984 ignition[746]: INFO : Stage: mount Oct 2 20:13:59.753676 ignition[746]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:13:59.753676 ignition[746]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:13:59.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.767487 ignition[746]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:13:59.767487 ignition[746]: INFO : mount: mount passed Oct 2 20:13:59.767487 ignition[746]: INFO : Ignition finished successfully Oct 2 20:13:59.770755 kernel: audit: type=1130 audit(1696277639.761:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.759067 systemd[1]: Finished ignition-mount.service. Oct 2 20:13:59.783280 systemd[1]: Finished sysroot-boot.service. Oct 2 20:13:59.788293 kernel: audit: type=1130 audit(1696277639.783:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:13:59.789265 coreos-metadata[676]: Oct 02 20:13:59.789 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 2 20:13:59.807816 coreos-metadata[676]: Oct 02 20:13:59.807 INFO Fetch successful Oct 2 20:13:59.808521 coreos-metadata[676]: Oct 02 20:13:59.808 INFO wrote hostname ci-3510-3-0-0-9f20c2149e.novalocal to /sysroot/etc/hostname Oct 2 20:13:59.811732 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 2 20:13:59.811857 systemd[1]: Finished flatcar-openstack-hostname.service. Oct 2 20:13:59.820609 kernel: audit: type=1130 audit(1696277639.812:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.820635 kernel: audit: type=1131 audit(1696277639.812:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:13:59.814059 systemd[1]: Starting ignition-files.service... Oct 2 20:13:59.824349 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:13:59.835598 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (755) Oct 2 20:13:59.839561 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:13:59.839603 kernel: BTRFS info (device vda6): using free space tree Oct 2 20:13:59.839614 kernel: BTRFS info (device vda6): has skinny extents Oct 2 20:13:59.896839 systemd-networkd[627]: eth0: Gained IPv6LL Oct 2 20:13:59.949494 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 20:13:59.973372 ignition[774]: INFO : Ignition 2.14.0 Oct 2 20:13:59.973372 ignition[774]: INFO : Stage: files Oct 2 20:13:59.976009 ignition[774]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:13:59.976009 ignition[774]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:13:59.981359 ignition[774]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:13:59.985857 ignition[774]: DEBUG : files: compiled without relabeling support, skipping Oct 2 20:13:59.989531 ignition[774]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 20:13:59.989531 ignition[774]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 20:14:00.068415 ignition[774]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 20:14:00.070662 ignition[774]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 20:14:00.099340 unknown[774]: wrote ssh authorized keys file for user: core Oct 2 20:14:00.101606 ignition[774]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 20:14:00.101606 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 20:14:00.101606 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 20:14:00.493878 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 20:14:00.788707 ignition[774]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 20:14:00.788707 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 20:14:00.788707 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 20:14:00.788707 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 20:14:00.964654 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 20:14:01.391626 ignition[774]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 20:14:01.395421 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 20:14:01.423650 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:14:01.425919 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 20:14:01.599346 ignition[774]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): GET result: OK Oct 2 20:14:03.061502 ignition[774]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 20:14:03.065365 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:14:03.065365 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:14:03.065365 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 20:14:03.227451 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 20:14:05.994566 ignition[774]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 20:14:05.996249 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:14:05.996249 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 20:14:05.996249 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 20:14:05.996249 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:14:05.996249 ignition[774]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:14:05.996249 ignition[774]: INFO : files: op(9): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 20:14:05.996249 ignition[774]: INFO : files: op(9): op(a): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(9): op(a): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(9): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(b): [started] processing unit "coreos-metadata.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(b): op(c): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(b): op(c): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(b): [finished] processing unit "coreos-metadata.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(d): [started] processing unit "prepare-cni-plugins.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(d): op(e): [started] writing unit "prepare-cni-plugins.service" at 
"/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(d): op(e): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(d): [finished] processing unit "prepare-cni-plugins.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(f): [started] processing unit "prepare-critools.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(f): op(10): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(f): op(10): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(f): [finished] processing unit "prepare-critools.service" Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 20:14:06.002103 ignition[774]: INFO : files: op(12): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:14:06.041415 kernel: audit: type=1130 audit(1696277646.011:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.041442 kernel: audit: type=1130 audit(1696277646.028:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.041456 kernel: audit: type=1131 audit(1696277646.028:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.041468 kernel: audit: type=1130 audit(1696277646.036:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:06.041646 ignition[774]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:14:06.041646 ignition[774]: INFO : files: op(13): [started] setting preset to enabled for "prepare-critools.service" Oct 2 20:14:06.041646 ignition[774]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 20:14:06.041646 ignition[774]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:14:06.041646 ignition[774]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:14:06.041646 ignition[774]: INFO : files: files passed Oct 2 20:14:06.041646 ignition[774]: INFO : Ignition finished successfully Oct 2 20:14:06.008509 systemd[1]: Finished ignition-files.service. Oct 2 20:14:06.014051 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 20:14:06.048474 initrd-setup-root-after-ignition[800]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 20:14:06.021972 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 20:14:06.022778 systemd[1]: Starting ignition-quench.service... Oct 2 20:14:06.027428 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 20:14:06.027512 systemd[1]: Finished ignition-quench.service. Oct 2 20:14:06.033193 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 20:14:06.037164 systemd[1]: Reached target ignition-complete.target. Oct 2 20:14:06.042637 systemd[1]: Starting initrd-parse-etc.service... Oct 2 20:14:06.056469 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 20:14:06.057343 systemd[1]: Finished initrd-parse-etc.service. Oct 2 20:14:06.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.059749 systemd[1]: Reached target initrd-fs.target. Oct 2 20:14:06.067157 kernel: audit: type=1130 audit(1696277646.058:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.067184 kernel: audit: type=1131 audit(1696277646.059:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.065818 systemd[1]: Reached target initrd.target. Oct 2 20:14:06.066283 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 20:14:06.067008 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 20:14:06.079099 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 20:14:06.084617 kernel: audit: type=1130 audit(1696277646.079:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:06.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.084263 systemd[1]: Starting initrd-cleanup.service... Oct 2 20:14:06.094459 systemd[1]: Stopped target nss-lookup.target. Oct 2 20:14:06.095094 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 20:14:06.099770 systemd[1]: Stopped target timers.target. Oct 2 20:14:06.100702 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 20:14:06.105522 kernel: audit: type=1131 audit(1696277646.101:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.100858 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 20:14:06.101744 systemd[1]: Stopped target initrd.target. Oct 2 20:14:06.106101 systemd[1]: Stopped target basic.target. Oct 2 20:14:06.107010 systemd[1]: Stopped target ignition-complete.target. Oct 2 20:14:06.107847 systemd[1]: Stopped target ignition-diskful.target. Oct 2 20:14:06.108688 systemd[1]: Stopped target initrd-root-device.target. Oct 2 20:14:06.109565 systemd[1]: Stopped target remote-fs.target. Oct 2 20:14:06.110411 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 20:14:06.111244 systemd[1]: Stopped target sysinit.target. Oct 2 20:14:06.112088 systemd[1]: Stopped target local-fs.target. Oct 2 20:14:06.113003 systemd[1]: Stopped target local-fs-pre.target. Oct 2 20:14:06.114026 systemd[1]: Stopped target swap.target. Oct 2 20:14:06.119471 kernel: audit: type=1131 audit(1696277646.115:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.114818 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 20:14:06.114958 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 20:14:06.124662 kernel: audit: type=1131 audit(1696277646.120:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.115794 systemd[1]: Stopped target cryptsetup.target. Oct 2 20:14:06.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.119989 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 20:14:06.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:06.120133 systemd[1]: Stopped dracut-initqueue.service. Oct 2 20:14:06.121087 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 20:14:06.121242 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 20:14:06.125294 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 20:14:06.125442 systemd[1]: Stopped ignition-files.service. Oct 2 20:14:06.126981 systemd[1]: Stopping ignition-mount.service... Oct 2 20:14:06.128507 systemd[1]: Stopping iscsiuio.service... Oct 2 20:14:06.136952 systemd[1]: Stopping sysroot-boot.service... Oct 2 20:14:06.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.141289 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 20:14:06.141497 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 20:14:06.145864 ignition[813]: INFO : Ignition 2.14.0 Oct 2 20:14:06.145864 ignition[813]: INFO : Stage: umount Oct 2 20:14:06.145864 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:14:06.145864 ignition[813]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 2 20:14:06.145864 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 2 20:14:06.145864 ignition[813]: INFO : umount: umount passed Oct 2 20:14:06.145864 ignition[813]: INFO : Ignition finished successfully Oct 2 20:14:06.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.142201 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 20:14:06.142355 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 20:14:06.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.151169 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 20:14:06.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.151281 systemd[1]: Stopped iscsiuio.service. Oct 2 20:14:06.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.152856 systemd[1]: ignition-mount.service: Deactivated successfully. 
Oct 2 20:14:06.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.152941 systemd[1]: Stopped ignition-mount.service. Oct 2 20:14:06.154883 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 20:14:06.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.154963 systemd[1]: Finished initrd-cleanup.service. Oct 2 20:14:06.157713 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 20:14:06.157757 systemd[1]: Stopped ignition-disks.service. Oct 2 20:14:06.158696 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 20:14:06.158733 systemd[1]: Stopped ignition-kargs.service. Oct 2 20:14:06.159699 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 20:14:06.159737 systemd[1]: Stopped ignition-fetch.service. Oct 2 20:14:06.160689 systemd[1]: Stopped target network.target. Oct 2 20:14:06.161799 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 20:14:06.161840 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 20:14:06.162879 systemd[1]: Stopped target paths.target. Oct 2 20:14:06.163837 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 20:14:06.167652 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 20:14:06.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.168893 systemd[1]: Stopped target slices.target. Oct 2 20:14:06.172397 systemd[1]: Stopped target sockets.target. Oct 2 20:14:06.173620 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 20:14:06.173646 systemd[1]: Closed iscsid.socket. Oct 2 20:14:06.174507 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 20:14:06.174541 systemd[1]: Closed iscsiuio.socket. Oct 2 20:14:06.175372 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 20:14:06.175421 systemd[1]: Stopped ignition-setup.service. Oct 2 20:14:06.176532 systemd[1]: Stopping systemd-networkd.service... Oct 2 20:14:06.177768 systemd[1]: Stopping systemd-resolved.service... Oct 2 20:14:06.179620 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 20:14:06.184623 systemd-networkd[627]: eth0: DHCPv6 lease lost Oct 2 20:14:06.186266 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 20:14:06.187620 systemd[1]: Stopped systemd-networkd.service. Oct 2 20:14:06.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.190465 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 20:14:06.191120 systemd[1]: Closed systemd-networkd.socket. Oct 2 20:14:06.190000 audit: BPF prog-id=9 op=UNLOAD Oct 2 20:14:06.193117 systemd[1]: Stopping network-cleanup.service... Oct 2 20:14:06.194263 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 20:14:06.195045 systemd[1]: Stopped parse-ip-for-networkd.service. 
Oct 2 20:14:06.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.196213 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 20:14:06.196835 systemd[1]: Stopped systemd-sysctl.service. Oct 2 20:14:06.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.197964 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 20:14:06.198689 systemd[1]: Stopped systemd-modules-load.service. Oct 2 20:14:06.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.199324 systemd[1]: Stopping systemd-udevd.service... Oct 2 20:14:06.201277 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 20:14:06.201882 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 20:14:06.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.201997 systemd[1]: Stopped systemd-resolved.service. Oct 2 20:14:06.205342 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 20:14:06.205508 systemd[1]: Stopped systemd-udevd.service. Oct 2 20:14:06.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.207774 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 20:14:06.207905 systemd[1]: Stopped sysroot-boot.service. Oct 2 20:14:06.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.209000 audit: BPF prog-id=6 op=UNLOAD Oct 2 20:14:06.209913 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 20:14:06.209973 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 20:14:06.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.210509 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 20:14:06.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.210540 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 20:14:06.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.211328 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 20:14:06.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:06.211374 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 20:14:06.212447 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 20:14:06.212499 systemd[1]: Stopped dracut-cmdline.service. Oct 2 20:14:06.213457 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 20:14:06.213498 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 20:14:06.214535 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 20:14:06.214622 systemd[1]: Stopped initrd-setup-root.service. Oct 2 20:14:06.216196 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 20:14:06.223994 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 20:14:06.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.224066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 20:14:06.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.225401 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 20:14:06.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.225456 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 20:14:06.226077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 20:14:06.226119 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 20:14:06.228162 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 20:14:06.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.228645 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 20:14:06.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.228728 systemd[1]: Stopped network-cleanup.service. Oct 2 20:14:06.229856 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 20:14:06.229938 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 20:14:06.231025 systemd[1]: Reached target initrd-switch-root.target. Oct 2 20:14:06.232675 systemd[1]: Starting initrd-switch-root.service... Oct 2 20:14:06.252000 systemd[1]: Switching root. Oct 2 20:14:06.273549 iscsid[632]: iscsid shutting down. Oct 2 20:14:06.274795 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Oct 2 20:14:06.274896 systemd-journald[184]: Journal stopped Oct 2 20:14:10.379521 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 20:14:10.379628 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 20:14:10.379645 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 20:14:10.379657 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 20:14:10.379674 kernel: SELinux: policy capability open_perms=1 Oct 2 20:14:10.379686 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 20:14:10.379701 kernel: SELinux: policy capability always_check_network=0 Oct 2 20:14:10.379712 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 20:14:10.379746 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 20:14:10.379761 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 20:14:10.379773 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 20:14:10.379785 systemd[1]: Successfully loaded SELinux policy in 96.278ms. Oct 2 20:14:10.379804 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.099ms. Oct 2 20:14:10.379818 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:14:10.379831 systemd[1]: Detected virtualization kvm. Oct 2 20:14:10.379843 systemd[1]: Detected architecture x86-64. Oct 2 20:14:10.379855 systemd[1]: Detected first boot. Oct 2 20:14:10.379867 systemd[1]: Hostname set to . Oct 2 20:14:10.379882 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:14:10.379894 systemd[1]: Populated /etc with preset unit settings. Oct 2 20:14:10.379907 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:14:10.379920 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:14:10.379934 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:14:10.379950 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 20:14:10.379967 systemd[1]: Stopped iscsid.service. Oct 2 20:14:10.379982 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 20:14:10.379994 systemd[1]: Stopped initrd-switch-root.service. Oct 2 20:14:10.380006 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 20:14:10.380018 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 20:14:10.380031 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 20:14:10.380043 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 20:14:10.380055 systemd[1]: Created slice system-getty.slice. Oct 2 20:14:10.380067 systemd[1]: Created slice system-modprobe.slice. Oct 2 20:14:10.380082 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 20:14:10.380094 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 20:14:10.380106 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 20:14:10.380118 systemd[1]: Created slice user.slice. Oct 2 20:14:10.380130 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:14:10.380142 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 20:14:10.380153 systemd[1]: Set up automount boot.automount. 
Oct 2 20:14:10.380166 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 20:14:10.380181 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 20:14:10.380193 systemd[1]: Stopped target initrd-fs.target. Oct 2 20:14:10.380206 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 20:14:10.380218 systemd[1]: Reached target integritysetup.target. Oct 2 20:14:10.380230 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:14:10.380243 systemd[1]: Reached target remote-fs.target. Oct 2 20:14:10.380255 systemd[1]: Reached target slices.target. Oct 2 20:14:10.380267 systemd[1]: Reached target swap.target. Oct 2 20:14:10.380281 systemd[1]: Reached target torcx.target. Oct 2 20:14:10.380293 systemd[1]: Reached target veritysetup.target. Oct 2 20:14:10.380306 systemd[1]: Listening on systemd-coredump.socket. Oct 2 20:14:10.380318 systemd[1]: Listening on systemd-initctl.socket. Oct 2 20:14:10.380330 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:14:10.380342 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:14:10.380354 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:14:10.380366 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 20:14:10.380378 systemd[1]: Mounting dev-hugepages.mount... Oct 2 20:14:10.380392 systemd[1]: Mounting dev-mqueue.mount... Oct 2 20:14:10.380404 systemd[1]: Mounting media.mount... Oct 2 20:14:10.380417 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:14:10.380429 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 20:14:10.380441 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 20:14:10.380453 systemd[1]: Mounting tmp.mount... Oct 2 20:14:10.380465 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 20:14:10.380478 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 20:14:10.380490 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:14:10.380504 systemd[1]: Starting modprobe@configfs.service... Oct 2 20:14:10.380517 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 20:14:10.380529 systemd[1]: Starting modprobe@drm.service... Oct 2 20:14:10.380540 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 20:14:10.380552 systemd[1]: Starting modprobe@fuse.service... Oct 2 20:14:10.380564 systemd[1]: Starting modprobe@loop.service... Oct 2 20:14:10.380603 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 20:14:10.380616 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 20:14:10.380628 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 20:14:10.380642 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 20:14:10.380654 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 20:14:10.380667 systemd[1]: Stopped systemd-journald.service. Oct 2 20:14:10.380679 systemd[1]: Starting systemd-journald.service... Oct 2 20:14:10.380691 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:14:10.380703 systemd[1]: Starting systemd-network-generator.service... Oct 2 20:14:10.380715 systemd[1]: Starting systemd-remount-fs.service... Oct 2 20:14:10.381628 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:14:10.381645 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 20:14:10.381662 systemd[1]: Stopped verity-setup.service. 
Oct 2 20:14:10.381675 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:14:10.381687 systemd[1]: Mounted dev-hugepages.mount. Oct 2 20:14:10.381700 systemd[1]: Mounted dev-mqueue.mount. Oct 2 20:14:10.381712 systemd[1]: Mounted media.mount. Oct 2 20:14:10.381724 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 20:14:10.381737 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 20:14:10.381749 systemd[1]: Mounted tmp.mount. Oct 2 20:14:10.381761 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:14:10.381773 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 20:14:10.381787 systemd[1]: Finished modprobe@configfs.service. Oct 2 20:14:10.381800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 20:14:10.381813 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 20:14:10.381824 kernel: fuse: init (API version 7.34) Oct 2 20:14:10.381838 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 20:14:10.381851 systemd[1]: Finished modprobe@drm.service. Oct 2 20:14:10.381866 systemd-journald[911]: Journal started Oct 2 20:14:10.381911 systemd-journald[911]: Runtime Journal (/run/log/journal/cb5b1a4487e74406a645196a11f0ad98) is 4.9M, max 39.5M, 34.5M free. Oct 2 20:14:06.597000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 20:14:06.726000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:14:06.726000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:14:06.727000 audit: BPF prog-id=10 op=LOAD Oct 2 20:14:06.727000 audit: BPF prog-id=10 op=UNLOAD Oct 2 20:14:06.727000 audit: BPF prog-id=11 op=LOAD Oct 2 20:14:06.728000 audit: BPF prog-id=11 op=UNLOAD Oct 2 20:14:10.175000 audit: BPF prog-id=12 op=LOAD Oct 2 20:14:10.175000 audit: BPF prog-id=3 op=UNLOAD Oct 2 20:14:10.175000 audit: BPF prog-id=13 op=LOAD Oct 2 20:14:10.175000 audit: BPF prog-id=14 op=LOAD Oct 2 20:14:10.175000 audit: BPF prog-id=4 op=UNLOAD Oct 2 20:14:10.175000 audit: BPF prog-id=5 op=UNLOAD Oct 2 20:14:10.176000 audit: BPF prog-id=15 op=LOAD Oct 2 20:14:10.176000 audit: BPF prog-id=12 op=UNLOAD Oct 2 20:14:10.177000 audit: BPF prog-id=16 op=LOAD Oct 2 20:14:10.177000 audit: BPF prog-id=17 op=LOAD Oct 2 20:14:10.177000 audit: BPF prog-id=13 op=UNLOAD Oct 2 20:14:10.177000 audit: BPF prog-id=14 op=UNLOAD Oct 2 20:14:10.177000 audit: BPF prog-id=18 op=LOAD Oct 2 20:14:10.177000 audit: BPF prog-id=15 op=UNLOAD Oct 2 20:14:10.177000 audit: BPF prog-id=19 op=LOAD Oct 2 20:14:10.178000 audit: BPF prog-id=20 op=LOAD Oct 2 20:14:10.178000 audit: BPF prog-id=16 op=UNLOAD Oct 2 20:14:10.178000 audit: BPF prog-id=17 op=UNLOAD Oct 2 20:14:10.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:10.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.188000 audit: BPF prog-id=18 op=UNLOAD Oct 2 20:14:10.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.311000 audit: BPF prog-id=21 op=LOAD Oct 2 20:14:10.311000 audit: BPF prog-id=22 op=LOAD Oct 2 20:14:10.312000 audit: BPF prog-id=23 op=LOAD Oct 2 20:14:10.312000 audit: BPF prog-id=19 op=UNLOAD Oct 2 20:14:10.312000 audit: BPF prog-id=20 op=UNLOAD Oct 2 20:14:10.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:10.377000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 20:14:10.377000 audit[911]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffde0262e20 a2=4000 a3=7ffde0262ebc items=0 ppid=1 pid=911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:10.377000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 20:14:10.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.892796 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:14:10.173482 systemd[1]: Queued start job for default target multi-user.target. Oct 2 20:14:06.893736 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:14:10.173494 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 20:14:06.893759 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:14:10.384651 systemd[1]: Started systemd-journald.service. Oct 2 20:14:10.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.179274 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 20:14:06.893815 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 20:14:06.893829 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 20:14:06.893862 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 20:14:10.384987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 20:14:06.893877 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 20:14:10.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:10.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.387638 kernel: loop: module loaded Oct 2 20:14:10.385196 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 20:14:06.894106 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 20:14:10.386152 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:14:06.894150 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:14:10.387628 systemd[1]: Finished systemd-network-generator.service. Oct 2 20:14:06.894167 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:14:06.895075 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 20:14:10.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.895114 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 20:14:10.388414 systemd[1]: Finished systemd-remount-fs.service. Oct 2 20:14:06.895135 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 20:14:06.895153 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 20:14:10.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:06.895173 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 20:14:10.389525 systemd[1]: Reached target network-pre.target. 
Oct 2 20:14:06.895189 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:06Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 20:14:09.786079 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:09Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:14:09.787398 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:09Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:14:09.787560 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:09Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:14:09.787802 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:09Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:14:09.787872 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:09Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 20:14:09.787967 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2023-10-02T20:14:09Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 20:14:10.392716 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 20:14:10.393236 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 20:14:10.395310 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 20:14:10.396786 systemd[1]: Starting systemd-journal-flush.service... Oct 2 20:14:10.397342 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 20:14:10.398468 systemd[1]: Starting systemd-random-seed.service... Oct 2 20:14:10.401689 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:14:10.404015 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 20:14:10.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.405377 systemd[1]: Finished modprobe@fuse.service. 
Oct 2 20:14:10.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.406212 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 20:14:10.406654 systemd[1]: Finished modprobe@loop.service. Oct 2 20:14:10.409185 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 20:14:10.419742 systemd-journald[911]: Time spent on flushing to /var/log/journal/cb5b1a4487e74406a645196a11f0ad98 is 38.039ms for 1114 entries. Oct 2 20:14:10.419742 systemd-journald[911]: System Journal (/var/log/journal/cb5b1a4487e74406a645196a11f0ad98) is 8.0M, max 584.8M, 576.8M free. Oct 2 20:14:10.499945 systemd-journald[911]: Received client request to flush runtime journal. Oct 2 20:14:10.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.412206 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 20:14:10.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.413800 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 20:14:10.425255 systemd[1]: Finished systemd-random-seed.service. Oct 2 20:14:10.501449 udevadm[949]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 20:14:10.425863 systemd[1]: Reached target first-boot-complete.target. Oct 2 20:14:10.426676 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 20:14:10.449861 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:14:10.481794 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:14:10.483382 systemd[1]: Starting systemd-udev-settle.service... Oct 2 20:14:10.498326 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 20:14:10.500150 systemd[1]: Starting systemd-sysusers.service... Oct 2 20:14:10.501862 systemd[1]: Finished systemd-journal-flush.service. Oct 2 20:14:10.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.538556 systemd[1]: Finished systemd-sysusers.service. Oct 2 20:14:10.540049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Oct 2 20:14:10.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:10.580519 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 20:14:10.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.048050 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 20:14:11.062013 kernel: kauditd_printk_skb: 99 callbacks suppressed Oct 2 20:14:11.062163 kernel: audit: type=1130 audit(1696277651.048:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.051000 audit: BPF prog-id=24 op=LOAD Oct 2 20:14:11.062883 systemd[1]: Starting systemd-udevd.service... Oct 2 20:14:11.065499 kernel: audit: type=1334 audit(1696277651.051:146): prog-id=24 op=LOAD Oct 2 20:14:11.065608 kernel: audit: type=1334 audit(1696277651.061:147): prog-id=25 op=LOAD Oct 2 20:14:11.065670 kernel: audit: type=1334 audit(1696277651.061:148): prog-id=7 op=UNLOAD Oct 2 20:14:11.065712 kernel: audit: type=1334 audit(1696277651.061:149): prog-id=8 op=UNLOAD Oct 2 20:14:11.061000 audit: BPF prog-id=25 op=LOAD Oct 2 20:14:11.061000 audit: BPF prog-id=7 op=UNLOAD Oct 2 20:14:11.061000 audit: BPF prog-id=8 op=UNLOAD Oct 2 20:14:11.100725 systemd-udevd[958]: Using default interface naming scheme 'v252'. Oct 2 20:14:11.140630 systemd[1]: Started systemd-udevd.service. Oct 2 20:14:11.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.155613 kernel: audit: type=1130 audit(1696277651.145:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.162000 audit: BPF prog-id=26 op=LOAD Oct 2 20:14:11.166627 kernel: audit: type=1334 audit(1696277651.162:151): prog-id=26 op=LOAD Oct 2 20:14:11.169373 systemd[1]: Starting systemd-networkd.service... Oct 2 20:14:11.179000 audit: BPF prog-id=27 op=LOAD Oct 2 20:14:11.180000 audit: BPF prog-id=28 op=LOAD Oct 2 20:14:11.188909 kernel: audit: type=1334 audit(1696277651.179:152): prog-id=27 op=LOAD Oct 2 20:14:11.188958 kernel: audit: type=1334 audit(1696277651.180:153): prog-id=28 op=LOAD Oct 2 20:14:11.189478 systemd[1]: Starting systemd-userdbd.service... Oct 2 20:14:11.180000 audit: BPF prog-id=29 op=LOAD Oct 2 20:14:11.194617 kernel: audit: type=1334 audit(1696277651.180:154): prog-id=29 op=LOAD Oct 2 20:14:11.202483 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
Oct 2 20:14:11.304603 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 2 20:14:11.310605 kernel: ACPI: button: Power Button [PWRF] Oct 2 20:14:11.317783 systemd[1]: Started systemd-userdbd.service. Oct 2 20:14:11.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.347000 audit[973]: AVC avc: denied { confidentiality } for pid=973 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 20:14:11.347000 audit[973]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561b769630e0 a1=32194 a2=7fd313a4cbc5 a3=5 items=106 ppid=958 pid=973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:11.347000 audit: CWD cwd="/" Oct 2 20:14:11.347000 audit: PATH item=0 name=(null) inode=14018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=1 name=(null) inode=14019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=2 name=(null) inode=14018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=3 name=(null) inode=14020 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=4 name=(null) inode=14018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=5 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=6 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=7 name=(null) inode=14022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=8 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=9 name=(null) inode=14023 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=10 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=11 name=(null) inode=14024 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=12 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=13 name=(null) inode=14025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=14 name=(null) inode=14021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=15 name=(null) inode=14026 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=16 name=(null) inode=14018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=17 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=18 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=19 name=(null) inode=14028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=20 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=21 name=(null) inode=14029 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=22 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=23 name=(null) inode=14030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=24 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=25 name=(null) inode=14031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=26 name=(null) inode=14027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=27 name=(null) inode=14032 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=28 name=(null) inode=14018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=29 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=30 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=31 name=(null) inode=14034 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=32 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=33 name=(null) inode=14035 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=34 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=35 name=(null) inode=14036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=36 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=37 name=(null) inode=14037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=38 name=(null) inode=14033 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=39 name=(null) inode=14038 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=40 name=(null) inode=14018 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=41 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=42 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=43 name=(null) inode=14040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH 
item=44 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=45 name=(null) inode=14041 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=46 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=47 name=(null) inode=14042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=48 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=49 name=(null) inode=14043 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=50 name=(null) inode=14039 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=51 name=(null) inode=14044 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=52 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=53 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=54 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=55 name=(null) inode=14046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=56 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=57 name=(null) inode=14047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=58 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=59 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=60 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=61 name=(null) inode=14049 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=62 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=63 name=(null) inode=14050 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=64 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=65 name=(null) inode=14051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=66 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=67 name=(null) inode=14052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=68 name=(null) inode=14048 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=69 name=(null) inode=14053 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=70 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=71 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=72 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=73 name=(null) inode=14055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=74 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=75 name=(null) inode=14056 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=76 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=77 name=(null) inode=14057 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=78 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=79 name=(null) inode=14058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=80 name=(null) inode=14054 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=81 name=(null) inode=14059 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=82 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=83 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=84 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=85 name=(null) inode=14061 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=86 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=87 name=(null) inode=14062 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=88 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=89 name=(null) inode=14063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=90 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=91 name=(null) inode=14064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=92 name=(null) inode=14060 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=93 name=(null) inode=14065 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=94 name=(null) inode=14045 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=95 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=96 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=97 name=(null) inode=14067 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=98 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=99 name=(null) inode=14068 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=100 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=101 name=(null) inode=14069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=102 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=103 name=(null) inode=14070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=104 name=(null) inode=14066 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PATH item=105 name=(null) inode=14071 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:14:11.347000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 20:14:11.410615 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Oct 2 20:14:11.437611 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 2 20:14:11.444656 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 20:14:11.447981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:14:11.468938 systemd[1]: Finished systemd-udev-settle.service. Oct 2 20:14:11.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:11.470778 systemd[1]: Starting lvm2-activation-early.service... Oct 2 20:14:11.501816 lvm[987]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:14:11.502768 systemd-networkd[974]: lo: Link UP Oct 2 20:14:11.502780 systemd-networkd[974]: lo: Gained carrier Oct 2 20:14:11.503230 systemd-networkd[974]: Enumeration completed Oct 2 20:14:11.503333 systemd[1]: Started systemd-networkd.service. Oct 2 20:14:11.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.506749 systemd-networkd[974]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:14:11.508796 systemd-networkd[974]: eth0: Link UP Oct 2 20:14:11.508873 systemd-networkd[974]: eth0: Gained carrier Oct 2 20:14:11.520690 systemd-networkd[974]: eth0: DHCPv4 address 172.24.4.201/24, gateway 172.24.4.1 acquired from 172.24.4.1 Oct 2 20:14:11.530584 systemd[1]: Finished lvm2-activation-early.service. Oct 2 20:14:11.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.531216 systemd[1]: Reached target cryptsetup.target. Oct 2 20:14:11.532813 systemd[1]: Starting lvm2-activation.service... Oct 2 20:14:11.540049 lvm[988]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:14:11.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.573985 systemd[1]: Finished lvm2-activation.service. Oct 2 20:14:11.574608 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:14:11.575078 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 20:14:11.575100 systemd[1]: Reached target local-fs.target. Oct 2 20:14:11.575528 systemd[1]: Reached target machines.target. Oct 2 20:14:11.577122 systemd[1]: Starting ldconfig.service... Oct 2 20:14:11.578642 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 20:14:11.578692 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:14:11.579850 systemd[1]: Starting systemd-boot-update.service... Oct 2 20:14:11.581377 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 20:14:11.583830 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 20:14:11.585646 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:14:11.585691 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:14:11.587271 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 20:14:11.608586 systemd[1]: boot.automount: Got automount request for /boot, triggered by 990 (bootctl) Oct 2 20:14:11.610052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Oct 2 20:14:11.652239 systemd-tmpfiles[993]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 20:14:11.663392 systemd-tmpfiles[993]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 20:14:11.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:11.664363 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 20:14:11.688173 systemd-tmpfiles[993]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 20:14:12.282627 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 20:14:12.284248 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 20:14:12.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.419845 systemd-fsck[998]: fsck.fat 4.2 (2021-01-31) Oct 2 20:14:12.419845 systemd-fsck[998]: /dev/vda1: 789 files, 115069/258078 clusters Oct 2 20:14:12.422524 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 20:14:12.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.427745 systemd[1]: Mounting boot.mount... Oct 2 20:14:12.462079 systemd[1]: Mounted boot.mount. Oct 2 20:14:12.499627 systemd[1]: Finished systemd-boot-update.service. Oct 2 20:14:12.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.581791 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 20:14:12.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.584396 systemd[1]: Starting audit-rules.service... Oct 2 20:14:12.588815 systemd[1]: Starting clean-ca-certificates.service... Oct 2 20:14:12.593816 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 20:14:12.601000 audit: BPF prog-id=30 op=LOAD Oct 2 20:14:12.608114 systemd[1]: Starting systemd-resolved.service... Oct 2 20:14:12.611000 audit: BPF prog-id=31 op=LOAD Oct 2 20:14:12.614292 systemd[1]: Starting systemd-timesyncd.service... Oct 2 20:14:12.617077 systemd[1]: Starting systemd-update-utmp.service... Oct 2 20:14:12.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.627228 systemd[1]: Finished clean-ca-certificates.service. Oct 2 20:14:12.627850 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 2 20:14:12.632000 audit[1012]: SYSTEM_BOOT pid=1012 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.635266 systemd[1]: Finished systemd-update-utmp.service. Oct 2 20:14:12.690438 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 20:14:12.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:12.691000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 20:14:12.691000 audit[1021]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd7162f5b0 a2=420 a3=0 items=0 ppid=1001 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:12.691000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 20:14:12.692530 augenrules[1021]: No rules Oct 2 20:14:12.693120 systemd[1]: Finished audit-rules.service. Oct 2 20:14:12.706782 systemd-resolved[1010]: Positive Trust Anchors: Oct 2 20:14:12.707091 systemd-resolved[1010]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:14:12.707184 systemd-resolved[1010]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:14:12.727540 systemd-resolved[1010]: Using system hostname 'ci-3510-3-0-0-9f20c2149e.novalocal'. Oct 2 20:14:12.730151 systemd[1]: Started systemd-resolved.service. Oct 2 20:14:12.730720 systemd[1]: Reached target network.target. Oct 2 20:14:12.731121 systemd[1]: Reached target nss-lookup.target. Oct 2 20:14:12.743120 systemd[1]: Started systemd-timesyncd.service. Oct 2 20:14:12.743681 systemd[1]: Reached target time-set.target. Oct 2 20:14:12.782805 systemd-timesyncd[1011]: Contacted time server 95.81.173.155:123 (0.flatcar.pool.ntp.org). Oct 2 20:14:12.783631 systemd-timesyncd[1011]: Initial clock synchronization to Mon 2023-10-02 20:14:12.679067 UTC. Oct 2 20:14:13.011638 ldconfig[989]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 20:14:13.024552 systemd[1]: Finished ldconfig.service. Oct 2 20:14:13.028805 systemd[1]: Starting systemd-update-done.service... Oct 2 20:14:13.043337 systemd[1]: Finished systemd-update-done.service. Oct 2 20:14:13.044691 systemd[1]: Reached target sysinit.target. Oct 2 20:14:13.045991 systemd[1]: Started motdgen.path. Oct 2 20:14:13.047058 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Oct 2 20:14:13.048752 systemd[1]: Started logrotate.timer. Oct 2 20:14:13.050013 systemd[1]: Started mdadm.timer. Oct 2 20:14:13.051008 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 20:14:13.052093 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 20:14:13.052159 systemd[1]: Reached target paths.target. Oct 2 20:14:13.053200 systemd[1]: Reached target timers.target. Oct 2 20:14:13.054795 systemd[1]: Listening on dbus.socket. Oct 2 20:14:13.057930 systemd[1]: Starting docker.socket... Oct 2 20:14:13.064807 systemd[1]: Listening on sshd.socket. Oct 2 20:14:13.066324 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:14:13.067422 systemd[1]: Listening on docker.socket. Oct 2 20:14:13.068866 systemd[1]: Reached target sockets.target. Oct 2 20:14:13.070110 systemd[1]: Reached target basic.target. Oct 2 20:14:13.071429 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:14:13.071719 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:14:13.074071 systemd[1]: Starting containerd.service... Oct 2 20:14:13.077974 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 20:14:13.081886 systemd[1]: Starting dbus.service... Oct 2 20:14:13.087235 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 20:14:13.095919 systemd[1]: Starting extend-filesystems.service... Oct 2 20:14:13.099851 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 20:14:13.102479 systemd[1]: Starting motdgen.service... Oct 2 20:14:13.108627 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 20:14:13.113089 systemd[1]: Starting prepare-critools.service... Oct 2 20:14:13.116745 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 20:14:13.119255 systemd[1]: Starting sshd-keygen.service... Oct 2 20:14:13.123324 systemd[1]: Starting systemd-logind.service... Oct 2 20:14:13.123824 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:14:13.123886 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 20:14:13.124329 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 20:14:13.154799 jq[1035]: false Oct 2 20:14:13.154963 tar[1047]: crictl Oct 2 20:14:13.127722 systemd[1]: Starting update-engine.service... Oct 2 20:14:13.159313 jq[1044]: true Oct 2 20:14:13.129216 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 20:14:13.140937 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 20:14:13.141089 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 20:14:13.156211 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 20:14:13.165333 tar[1046]: ./ Oct 2 20:14:13.165333 tar[1046]: ./macvlan Oct 2 20:14:13.156362 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Oct 2 20:14:13.174809 jq[1058]: true Oct 2 20:14:13.207285 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 20:14:13.207457 systemd[1]: Finished motdgen.service. Oct 2 20:14:13.213003 extend-filesystems[1036]: Found vda Oct 2 20:14:13.213739 extend-filesystems[1036]: Found vda1 Oct 2 20:14:13.213739 extend-filesystems[1036]: Found vda2 Oct 2 20:14:13.213739 extend-filesystems[1036]: Found vda3 Oct 2 20:14:13.213739 extend-filesystems[1036]: Found usr Oct 2 20:14:13.213739 extend-filesystems[1036]: Found vda4 Oct 2 20:14:13.213739 extend-filesystems[1036]: Found vda6 Oct 2 20:14:13.213739 extend-filesystems[1036]: Found vda7 Oct 2 20:14:13.213739 extend-filesystems[1036]: Found vda9 Oct 2 20:14:13.213739 extend-filesystems[1036]: Checking size of /dev/vda9 Oct 2 20:14:13.247685 dbus-daemon[1032]: [system] SELinux support is enabled Oct 2 20:14:13.247856 systemd[1]: Started dbus.service. Oct 2 20:14:13.254165 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 20:14:13.254189 systemd[1]: Reached target system-config.target. Oct 2 20:14:13.256764 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 20:14:13.256783 systemd[1]: Reached target user-config.target. Oct 2 20:14:13.275328 extend-filesystems[1036]: Resized partition /dev/vda9 Oct 2 20:14:13.282507 extend-filesystems[1087]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 20:14:13.317750 update_engine[1043]: I1002 20:14:13.313865 1043 main.cc:92] Flatcar Update Engine starting Oct 2 20:14:13.325587 env[1055]: time="2023-10-02T20:14:13.324381421Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 20:14:13.330589 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Oct 2 20:14:13.333168 systemd[1]: Started update-engine.service. Oct 2 20:14:13.338308 bash[1083]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:14:13.338405 update_engine[1043]: I1002 20:14:13.333632 1043 update_check_scheduler.cc:74] Next update check in 9m30s Oct 2 20:14:13.336215 systemd[1]: Started locksmithd.service. Oct 2 20:14:13.337681 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 20:14:13.352326 systemd-logind[1042]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 20:14:13.352725 systemd-logind[1042]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 20:14:13.352854 coreos-metadata[1031]: Oct 02 20:14:13.352 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 2 20:14:13.355041 systemd-logind[1042]: New seat seat0. Oct 2 20:14:13.360957 systemd[1]: Started systemd-logind.service. Oct 2 20:14:13.372344 coreos-metadata[1031]: Oct 02 20:14:13.372 INFO Fetch successful Oct 2 20:14:13.372344 coreos-metadata[1031]: Oct 02 20:14:13.372 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 20:14:13.388586 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Oct 2 20:14:13.389925 coreos-metadata[1031]: Oct 02 20:14:13.389 INFO Fetch successful Oct 2 20:14:13.445669 env[1055]: time="2023-10-02T20:14:13.409831304Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Oct 2 20:14:13.448729 extend-filesystems[1087]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 2 20:14:13.448729 extend-filesystems[1087]: old_desc_blocks = 1, new_desc_blocks = 3 Oct 2 20:14:13.448729 extend-filesystems[1087]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Oct 2 20:14:13.456971 extend-filesystems[1036]: Resized filesystem in /dev/vda9 Oct 2 20:14:13.449243 unknown[1031]: wrote ssh authorized keys file for user: core Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.449622556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.453470801Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.453518743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.453851502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.453891374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.453908492Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.453920409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.454028189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.454383623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:14:13.457977 env[1055]: time="2023-10-02T20:14:13.454541928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:14:13.449635 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 20:14:13.458430 env[1055]: time="2023-10-02T20:14:13.454597316Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 20:14:13.458430 env[1055]: time="2023-10-02T20:14:13.454691955Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 20:14:13.458430 env[1055]: time="2023-10-02T20:14:13.454710299Z" level=info msg="metadata content store policy set" policy=shared Oct 2 20:14:13.449806 systemd[1]: Finished extend-filesystems.service. 
Oct 2 20:14:13.460431 tar[1046]: ./static Oct 2 20:14:13.464749 systemd-networkd[974]: eth0: Gained IPv6LL Oct 2 20:14:13.474730 update-ssh-keys[1094]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:14:13.475389 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 20:14:13.477067 env[1055]: time="2023-10-02T20:14:13.477027206Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 20:14:13.477114 env[1055]: time="2023-10-02T20:14:13.477071153Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 20:14:13.477145 env[1055]: time="2023-10-02T20:14:13.477122685Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 20:14:13.477212 env[1055]: time="2023-10-02T20:14:13.477186964Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477263 env[1055]: time="2023-10-02T20:14:13.477218362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477263 env[1055]: time="2023-10-02T20:14:13.477235381Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477325 env[1055]: time="2023-10-02T20:14:13.477267906Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477325 env[1055]: time="2023-10-02T20:14:13.477290266Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477325 env[1055]: time="2023-10-02T20:14:13.477305860Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477325 env[1055]: time="2023-10-02T20:14:13.477320477Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477416 env[1055]: time="2023-10-02T20:14:13.477352141Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.477416 env[1055]: time="2023-10-02T20:14:13.477368844Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 20:14:13.477539 env[1055]: time="2023-10-02T20:14:13.477495404Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 20:14:13.477659 env[1055]: time="2023-10-02T20:14:13.477634138Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478017548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478069110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478087721Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478137156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478154165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478168979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478182448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478197618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478212432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478226365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478240853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478257734Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478430793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478452024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.479509 env[1055]: time="2023-10-02T20:14:13.478469775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 20:14:13.480012 env[1055]: time="2023-10-02T20:14:13.478486666Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 20:14:13.480012 env[1055]: time="2023-10-02T20:14:13.478506098Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 20:14:13.480012 env[1055]: time="2023-10-02T20:14:13.478521307Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 20:14:13.480012 env[1055]: time="2023-10-02T20:14:13.478540561Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 20:14:13.480012 env[1055]: time="2023-10-02T20:14:13.478601775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 20:14:13.480141 env[1055]: time="2023-10-02T20:14:13.478839537Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 20:14:13.480141 env[1055]: time="2023-10-02T20:14:13.478908474Z" level=info msg="Connect containerd service" Oct 2 20:14:13.480141 env[1055]: time="2023-10-02T20:14:13.478945212Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.480465747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481128187Z" level=info msg="Start subscribing containerd event" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481169573Z" level=info msg="Start recovering state" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481226939Z" level=info msg="Start event monitor" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481327511Z" level=info msg="Start snapshots syncer" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481337984Z" level=info msg="Start cni network conf syncer for default" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481345875Z" level=info msg="Start streaming server" Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481679671Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 20:14:13.484794 env[1055]: time="2023-10-02T20:14:13.481720265Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 20:14:13.481834 systemd[1]: Started containerd.service. Oct 2 20:14:13.491911 env[1055]: time="2023-10-02T20:14:13.491836950Z" level=info msg="containerd successfully booted in 0.195405s" Oct 2 20:14:13.528103 tar[1046]: ./vlan Oct 2 20:14:13.590699 systemd[1]: Created slice system-sshd.slice. Oct 2 20:14:13.608983 tar[1046]: ./portmap Oct 2 20:14:13.678620 tar[1046]: ./host-local Oct 2 20:14:13.731119 systemd[1]: Finished prepare-critools.service. Oct 2 20:14:13.740809 tar[1046]: ./vrf Oct 2 20:14:13.773674 tar[1046]: ./bridge Oct 2 20:14:13.813615 tar[1046]: ./tuning Oct 2 20:14:13.845490 tar[1046]: ./firewall Oct 2 20:14:13.887241 tar[1046]: ./host-device Oct 2 20:14:13.923606 tar[1046]: ./sbr Oct 2 20:14:13.956518 tar[1046]: ./loopback Oct 2 20:14:13.987820 tar[1046]: ./dhcp Oct 2 20:14:14.079185 tar[1046]: ./ptp Oct 2 20:14:14.118027 tar[1046]: ./ipvlan Oct 2 20:14:14.156178 tar[1046]: ./bandwidth Oct 2 20:14:14.204372 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 20:14:14.253321 locksmithd[1091]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 20:14:15.568246 sshd_keygen[1061]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 20:14:15.598780 systemd[1]: Finished sshd-keygen.service. Oct 2 20:14:15.601106 systemd[1]: Starting issuegen.service... Oct 2 20:14:15.602732 systemd[1]: Started sshd@0-172.24.4.201:22-172.24.4.1:49040.service. Oct 2 20:14:15.617979 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 20:14:15.618131 systemd[1]: Finished issuegen.service. Oct 2 20:14:15.620138 systemd[1]: Starting systemd-user-sessions.service... Oct 2 20:14:15.628946 systemd[1]: Finished systemd-user-sessions.service. Oct 2 20:14:15.630944 systemd[1]: Started getty@tty1.service. Oct 2 20:14:15.632776 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 20:14:15.633428 systemd[1]: Reached target getty.target. Oct 2 20:14:15.633995 systemd[1]: Reached target multi-user.target. Oct 2 20:14:15.636051 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 20:14:15.644093 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 20:14:15.644246 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 20:14:15.644865 systemd[1]: Startup finished in 1.005s (kernel) + 11.656s (initrd) + 9.172s (userspace) = 21.835s. Oct 2 20:14:17.023085 sshd[1115]: Accepted publickey for core from 172.24.4.1 port 49040 ssh2: RSA SHA256:q9+Ye9PJtNeEYaEmKUiAJUY+7d4xsgDWJfPGYyGwvrE Oct 2 20:14:17.027381 sshd[1115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:17.055314 systemd-logind[1042]: New session 1 of user core. Oct 2 20:14:17.058145 systemd[1]: Created slice user-500.slice. Oct 2 20:14:17.061023 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 20:14:17.079073 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 20:14:17.081674 systemd[1]: Starting user@500.service... Oct 2 20:14:17.088394 (systemd)[1124]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:17.214768 systemd[1124]: Queued start job for default target default.target. Oct 2 20:14:17.215275 systemd[1124]: Reached target paths.target. Oct 2 20:14:17.215294 systemd[1124]: Reached target sockets.target. 
Oct 2 20:14:17.215309 systemd[1124]: Reached target timers.target. Oct 2 20:14:17.215322 systemd[1124]: Reached target basic.target. Oct 2 20:14:17.215417 systemd[1]: Started user@500.service. Oct 2 20:14:17.216279 systemd[1]: Started session-1.scope. Oct 2 20:14:17.216699 systemd[1124]: Reached target default.target. Oct 2 20:14:17.216848 systemd[1124]: Startup finished in 118ms. Oct 2 20:14:17.760464 systemd[1]: Started sshd@1-172.24.4.201:22-172.24.4.1:58206.service. Oct 2 20:14:19.504998 sshd[1133]: Accepted publickey for core from 172.24.4.1 port 58206 ssh2: RSA SHA256:q9+Ye9PJtNeEYaEmKUiAJUY+7d4xsgDWJfPGYyGwvrE Oct 2 20:14:19.508444 sshd[1133]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:19.519489 systemd-logind[1042]: New session 2 of user core. Oct 2 20:14:19.520997 systemd[1]: Started session-2.scope. Oct 2 20:14:20.150835 sshd[1133]: pam_unix(sshd:session): session closed for user core Oct 2 20:14:20.157763 systemd[1]: sshd@1-172.24.4.201:22-172.24.4.1:58206.service: Deactivated successfully. Oct 2 20:14:20.159432 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 20:14:20.161066 systemd-logind[1042]: Session 2 logged out. Waiting for processes to exit. Oct 2 20:14:20.164118 systemd[1]: Started sshd@2-172.24.4.201:22-172.24.4.1:58208.service. Oct 2 20:14:20.167115 systemd-logind[1042]: Removed session 2. Oct 2 20:14:21.269041 sshd[1139]: Accepted publickey for core from 172.24.4.1 port 58208 ssh2: RSA SHA256:q9+Ye9PJtNeEYaEmKUiAJUY+7d4xsgDWJfPGYyGwvrE Oct 2 20:14:21.271623 sshd[1139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:21.281047 systemd-logind[1042]: New session 3 of user core. Oct 2 20:14:21.281830 systemd[1]: Started session-3.scope. Oct 2 20:14:21.914923 sshd[1139]: pam_unix(sshd:session): session closed for user core Oct 2 20:14:21.923207 systemd[1]: Started sshd@3-172.24.4.201:22-172.24.4.1:58222.service. Oct 2 20:14:21.924372 systemd[1]: sshd@2-172.24.4.201:22-172.24.4.1:58208.service: Deactivated successfully. Oct 2 20:14:21.926423 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 20:14:21.928818 systemd-logind[1042]: Session 3 logged out. Waiting for processes to exit. Oct 2 20:14:21.931363 systemd-logind[1042]: Removed session 3. Oct 2 20:14:23.445534 sshd[1144]: Accepted publickey for core from 172.24.4.1 port 58222 ssh2: RSA SHA256:q9+Ye9PJtNeEYaEmKUiAJUY+7d4xsgDWJfPGYyGwvrE Oct 2 20:14:23.448539 sshd[1144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:23.460531 systemd-logind[1042]: New session 4 of user core. Oct 2 20:14:23.461235 systemd[1]: Started session-4.scope. Oct 2 20:14:24.022845 sshd[1144]: pam_unix(sshd:session): session closed for user core Oct 2 20:14:24.032627 systemd[1]: Started sshd@4-172.24.4.201:22-172.24.4.1:58228.service. Oct 2 20:14:24.033914 systemd[1]: sshd@3-172.24.4.201:22-172.24.4.1:58222.service: Deactivated successfully. Oct 2 20:14:24.038080 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 20:14:24.040662 systemd-logind[1042]: Session 4 logged out. Waiting for processes to exit. Oct 2 20:14:24.043905 systemd-logind[1042]: Removed session 4. Oct 2 20:14:25.215043 sshd[1150]: Accepted publickey for core from 172.24.4.1 port 58228 ssh2: RSA SHA256:q9+Ye9PJtNeEYaEmKUiAJUY+7d4xsgDWJfPGYyGwvrE Oct 2 20:14:25.217915 sshd[1150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:25.227702 systemd-logind[1042]: New session 5 of user core. 
Oct 2 20:14:25.229538 systemd[1]: Started session-5.scope. Oct 2 20:14:25.688937 sudo[1154]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 20:14:25.690655 sudo[1154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:14:25.701838 dbus-daemon[1032]: avc: received setenforce notice (enforcing=1) Oct 2 20:14:25.706959 sudo[1154]: pam_unix(sudo:session): session closed for user root Oct 2 20:14:25.939123 sshd[1150]: pam_unix(sshd:session): session closed for user core Oct 2 20:14:25.945665 systemd[1]: Started sshd@5-172.24.4.201:22-172.24.4.1:55472.service. Oct 2 20:14:25.949116 systemd[1]: sshd@4-172.24.4.201:22-172.24.4.1:58228.service: Deactivated successfully. Oct 2 20:14:25.950925 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 20:14:25.954505 systemd-logind[1042]: Session 5 logged out. Waiting for processes to exit. Oct 2 20:14:25.959832 systemd-logind[1042]: Removed session 5. Oct 2 20:14:27.226366 sshd[1157]: Accepted publickey for core from 172.24.4.1 port 55472 ssh2: RSA SHA256:q9+Ye9PJtNeEYaEmKUiAJUY+7d4xsgDWJfPGYyGwvrE Oct 2 20:14:27.229909 sshd[1157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:27.240210 systemd-logind[1042]: New session 6 of user core. Oct 2 20:14:27.240617 systemd[1]: Started session-6.scope. Oct 2 20:14:27.700847 sudo[1162]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 20:14:27.701310 sudo[1162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:14:27.707826 sudo[1162]: pam_unix(sudo:session): session closed for user root Oct 2 20:14:27.717555 sudo[1161]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 20:14:27.718129 sudo[1161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:14:27.739781 systemd[1]: Stopping audit-rules.service... Oct 2 20:14:27.740000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:14:27.743486 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 20:14:27.743648 kernel: audit: type=1305 audit(1696277667.740:173): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:14:27.744111 auditctl[1165]: No rules Oct 2 20:14:27.745171 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 20:14:27.745685 systemd[1]: Stopped audit-rules.service. Oct 2 20:14:27.740000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbe788190 a2=420 a3=0 items=0 ppid=1 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:27.759295 kernel: audit: type=1300 audit(1696277667.740:173): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbe788190 a2=420 a3=0 items=0 ppid=1 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:27.760998 systemd[1]: Starting audit-rules.service...
Oct 2 20:14:27.740000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:14:27.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.777031 kernel: audit: type=1327 audit(1696277667.740:173): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:14:27.777139 kernel: audit: type=1131 audit(1696277667.743:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.801757 augenrules[1182]: No rules Oct 2 20:14:27.803327 systemd[1]: Finished audit-rules.service. Oct 2 20:14:27.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.816929 kernel: audit: type=1130 audit(1696277667.803:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.817062 kernel: audit: type=1106 audit(1696277667.813:176): pid=1161 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.813000 audit[1161]: USER_END pid=1161 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.815242 sudo[1161]: pam_unix(sudo:session): session closed for user root Oct 2 20:14:27.813000 audit[1161]: CRED_DISP pid=1161 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.827638 kernel: audit: type=1104 audit(1696277667.813:177): pid=1161 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.976180 sshd[1157]: pam_unix(sshd:session): session closed for user core Oct 2 20:14:27.980000 audit[1157]: USER_END pid=1157 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:27.983146 systemd[1]: Started sshd@6-172.24.4.201:22-172.24.4.1:55476.service. Oct 2 20:14:27.988127 systemd[1]: sshd@5-172.24.4.201:22-172.24.4.1:55472.service: Deactivated successfully. Oct 2 20:14:27.989498 systemd[1]: session-6.scope: Deactivated successfully. 
Oct 2 20:14:28.001644 kernel: audit: type=1106 audit(1696277667.980:178): pid=1157 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:27.980000 audit[1157]: CRED_DISP pid=1157 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:28.013958 systemd-logind[1042]: Session 6 logged out. Waiting for processes to exit. Oct 2 20:14:28.015087 kernel: audit: type=1104 audit(1696277667.980:179): pid=1157 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:28.015765 kernel: audit: type=1130 audit(1696277667.981:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.201:22-172.24.4.1:55476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.201:22-172.24.4.1:55476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:27.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.24.4.201:22-172.24.4.1:55472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:28.028781 systemd-logind[1042]: Removed session 6. Oct 2 20:14:29.158000 audit[1187]: USER_ACCT pid=1187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:29.161024 sshd[1187]: Accepted publickey for core from 172.24.4.1 port 55476 ssh2: RSA SHA256:q9+Ye9PJtNeEYaEmKUiAJUY+7d4xsgDWJfPGYyGwvrE Oct 2 20:14:29.162000 audit[1187]: CRED_ACQ pid=1187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:29.162000 audit[1187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffad1a2120 a2=3 a3=0 items=0 ppid=1 pid=1187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:29.162000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 20:14:29.164380 sshd[1187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:14:29.173437 systemd-logind[1042]: New session 7 of user core. Oct 2 20:14:29.175845 systemd[1]: Started session-7.scope. 
Oct 2 20:14:29.188000 audit[1187]: USER_START pid=1187 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:29.191000 audit[1190]: CRED_ACQ pid=1190 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:29.594000 audit[1191]: USER_ACCT pid=1191 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:29.597073 sudo[1191]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 20:14:29.596000 audit[1191]: CRED_REFR pid=1191 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:29.598385 sudo[1191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:14:29.600000 audit[1191]: USER_START pid=1191 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:30.256984 systemd[1]: Reloading. Oct 2 20:14:30.430251 /usr/lib/systemd/system-generators/torcx-generator[1223]: time="2023-10-02T20:14:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:14:30.430284 /usr/lib/systemd/system-generators/torcx-generator[1223]: time="2023-10-02T20:14:30Z" level=info msg="torcx already run" Oct 2 20:14:30.520109 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:14:30.520129 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:14:30.547867 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.615000 audit: BPF prog-id=37 op=LOAD Oct 2 20:14:30.615000 audit: BPF prog-id=31 op=UNLOAD Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.616000 audit: BPF prog-id=38 op=LOAD Oct 2 20:14:30.616000 audit: BPF prog-id=30 op=UNLOAD Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.619000 audit: BPF prog-id=39 op=LOAD Oct 2 20:14:30.619000 audit: BPF prog-id=27 op=UNLOAD Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit: BPF prog-id=40 op=LOAD Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.620000 audit: BPF prog-id=41 op=LOAD Oct 2 20:14:30.620000 audit: BPF prog-id=28 op=UNLOAD Oct 2 20:14:30.620000 audit: BPF prog-id=29 op=UNLOAD Oct 2 20:14:30.621000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.621000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.622000 audit: BPF prog-id=42 op=LOAD Oct 2 20:14:30.622000 audit: BPF prog-id=26 op=UNLOAD Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 
audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.623000 audit: BPF prog-id=43 op=LOAD Oct 2 20:14:30.624000 audit: BPF prog-id=21 op=UNLOAD Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit: BPF prog-id=44 op=LOAD Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.624000 audit: BPF prog-id=45 op=LOAD Oct 2 20:14:30.624000 audit: BPF prog-id=22 op=UNLOAD Oct 2 20:14:30.624000 audit: BPF prog-id=23 op=UNLOAD Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit: BPF prog-id=46 op=LOAD Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.625000 audit: BPF prog-id=47 op=LOAD Oct 2 20:14:30.625000 audit: BPF prog-id=24 op=UNLOAD Oct 2 20:14:30.625000 audit: BPF prog-id=25 op=UNLOAD Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit: BPF prog-id=48 op=LOAD Oct 2 20:14:30.626000 audit: BPF prog-id=32 op=UNLOAD Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit: BPF prog-id=49 op=LOAD Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.626000 audit: BPF prog-id=50 op=LOAD Oct 2 20:14:30.626000 audit: BPF prog-id=33 op=UNLOAD Oct 2 20:14:30.626000 audit: BPF prog-id=34 op=UNLOAD Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:30.627000 audit: BPF prog-id=51 op=LOAD Oct 2 20:14:30.627000 audit: BPF prog-id=35 op=UNLOAD Oct 2 20:14:30.638588 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 20:14:30.645431 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 20:14:30.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:30.646050 systemd[1]: Reached target network-online.target. Oct 2 20:14:30.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:30.647547 systemd[1]: Started kubelet.service. Oct 2 20:14:30.659386 systemd[1]: Starting coreos-metadata.service... 
Oct 2 20:14:30.713635 coreos-metadata[1274]: Oct 02 20:14:30.713 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 2 20:14:30.721129 kubelet[1267]: E1002 20:14:30.721076 1267 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 20:14:30.723473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 20:14:30.723642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 20:14:30.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 20:14:31.066387 coreos-metadata[1274]: Oct 02 20:14:31.066 INFO Fetch successful Oct 2 20:14:31.066387 coreos-metadata[1274]: Oct 02 20:14:31.066 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 2 20:14:31.085094 coreos-metadata[1274]: Oct 02 20:14:31.085 INFO Fetch successful Oct 2 20:14:31.085094 coreos-metadata[1274]: Oct 02 20:14:31.085 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 2 20:14:31.100862 coreos-metadata[1274]: Oct 02 20:14:31.100 INFO Fetch successful Oct 2 20:14:31.100862 coreos-metadata[1274]: Oct 02 20:14:31.100 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 2 20:14:31.119835 coreos-metadata[1274]: Oct 02 20:14:31.119 INFO Fetch successful Oct 2 20:14:31.119835 coreos-metadata[1274]: Oct 02 20:14:31.119 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 2 20:14:31.136062 coreos-metadata[1274]: Oct 02 20:14:31.135 INFO Fetch successful Oct 2 20:14:31.152642 systemd[1]: Finished coreos-metadata.service. Oct 2 20:14:31.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:31.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:31.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:31.894894 systemd[1]: Stopped kubelet.service. Oct 2 20:14:31.935271 systemd[1]: Reloading. Oct 2 20:14:32.077953 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2023-10-02T20:14:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:14:32.077985 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2023-10-02T20:14:32Z" level=info msg="torcx already run" Oct 2 20:14:32.192236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Oct 2 20:14:32.192660 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:14:32.218606 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.281000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.282000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.282000 audit: BPF prog-id=52 op=LOAD Oct 2 20:14:32.283000 audit: BPF prog-id=37 op=UNLOAD Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.284000 audit: BPF prog-id=53 op=LOAD Oct 2 20:14:32.284000 audit: BPF prog-id=38 op=UNLOAD Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.287000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit: BPF prog-id=54 op=LOAD Oct 2 20:14:32.288000 audit: BPF prog-id=39 op=UNLOAD Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.288000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit: BPF prog-id=55 op=LOAD Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.289000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.290000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.290000 audit: BPF prog-id=56 op=LOAD Oct 2 20:14:32.290000 audit: BPF prog-id=40 op=UNLOAD Oct 2 20:14:32.290000 audit: BPF prog-id=41 op=UNLOAD Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.291000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.292000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.292000 audit: BPF prog-id=57 op=LOAD Oct 2 20:14:32.292000 audit: BPF prog-id=42 op=UNLOAD Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.293000 audit: BPF prog-id=58 op=LOAD Oct 2 20:14:32.294000 audit: BPF prog-id=43 op=UNLOAD Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit: BPF prog-id=59 op=LOAD Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.294000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.295000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.295000 audit: BPF prog-id=60 op=LOAD Oct 2 20:14:32.295000 audit: BPF prog-id=44 op=UNLOAD Oct 2 20:14:32.295000 audit: BPF prog-id=45 op=UNLOAD Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit: BPF prog-id=61 op=LOAD Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.297000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.297000 audit: BPF prog-id=62 op=LOAD Oct 2 20:14:32.297000 audit: BPF prog-id=46 op=UNLOAD Oct 2 20:14:32.297000 audit: BPF prog-id=47 op=UNLOAD Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.298000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 20:14:32.298000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit: BPF prog-id=63 op=LOAD Oct 2 20:14:32.299000 audit: BPF prog-id=48 op=UNLOAD Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.299000 audit: BPF prog-id=64 op=LOAD Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.300000 audit: BPF prog-id=65 op=LOAD Oct 2 20:14:32.300000 audit: BPF prog-id=49 op=UNLOAD Oct 2 20:14:32.300000 audit: BPF prog-id=50 op=UNLOAD Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.301000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.302000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.302000 audit: BPF prog-id=66 op=LOAD Oct 2 20:14:32.302000 audit: BPF prog-id=51 op=UNLOAD Oct 2 20:14:32.317302 systemd[1]: Started kubelet.service. Oct 2 20:14:32.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:14:32.364434 kubelet[1379]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:14:32.364434 kubelet[1379]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:14:32.364434 kubelet[1379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:14:32.364967 kubelet[1379]: I1002 20:14:32.364472 1379 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 20:14:32.365991 kubelet[1379]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:14:32.365991 kubelet[1379]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:14:32.365991 kubelet[1379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:14:32.929101 kubelet[1379]: I1002 20:14:32.929061 1379 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 20:14:32.929101 kubelet[1379]: I1002 20:14:32.929087 1379 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 20:14:32.929374 kubelet[1379]: I1002 20:14:32.929329 1379 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 20:14:32.932071 kubelet[1379]: I1002 20:14:32.932037 1379 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 20:14:32.933819 kubelet[1379]: I1002 20:14:32.933790 1379 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 20:14:32.933979 kubelet[1379]: I1002 20:14:32.933952 1379 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 20:14:32.934052 kubelet[1379]: I1002 20:14:32.934020 1379 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 20:14:32.934052 kubelet[1379]: I1002 20:14:32.934039 1379 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 20:14:32.934052 kubelet[1379]: I1002 20:14:32.934050 1379 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 20:14:32.934336 kubelet[1379]: I1002 20:14:32.934130 1379 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:14:32.937206 kubelet[1379]: I1002 20:14:32.937172 1379 kubelet.go:381] "Attempting to sync node with API server" Oct 2 20:14:32.937206 kubelet[1379]: I1002 20:14:32.937196 1379 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 20:14:32.937206 kubelet[1379]: I1002 20:14:32.937213 1379 kubelet.go:281] "Adding apiserver pod source" Oct 2 20:14:32.937425 kubelet[1379]: I1002 20:14:32.937223 1379 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 20:14:32.937913 kubelet[1379]: E1002 20:14:32.937884 1379 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:32.938013 kubelet[1379]: E1002 20:14:32.937923 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:32.938747 kubelet[1379]: I1002 20:14:32.938714 1379 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 20:14:32.939000 kubelet[1379]: W1002 20:14:32.938969 1379 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 20:14:32.939365 kubelet[1379]: I1002 20:14:32.939332 1379 server.go:1175] "Started kubelet" Oct 2 20:14:32.945420 kubelet[1379]: E1002 20:14:32.945390 1379 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 20:14:32.945420 kubelet[1379]: E1002 20:14:32.945414 1379 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 20:14:32.946440 kubelet[1379]: I1002 20:14:32.946398 1379 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 20:14:32.947067 kubelet[1379]: I1002 20:14:32.947017 1379 server.go:438] "Adding debug handlers to kubelet server" Oct 2 20:14:32.956935 kernel: kauditd_printk_skb: 362 callbacks suppressed Oct 2 20:14:32.957060 kernel: audit: type=1400 audit(1696277672.946:541): avc: denied { mac_admin } for pid=1379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.946000 audit[1379]: AVC avc: denied { mac_admin } for pid=1379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.957332 kubelet[1379]: I1002 20:14:32.957297 1379 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 20:14:32.958670 kubelet[1379]: I1002 20:14:32.958657 1379 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 20:14:32.946000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:14:32.958952 kubelet[1379]: I1002 20:14:32.958941 1379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 20:14:32.963155 kernel: audit: type=1401 audit(1696277672.946:541): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:14:32.946000 audit[1379]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000675cb0 a1=c00090e8d0 a2=c000675c80 a3=25 items=0 ppid=1 pid=1379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:32.964806 kubelet[1379]: I1002 20:14:32.964790 1379 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 20:14:32.964945 kubelet[1379]: I1002 20:14:32.964933 1379 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 20:14:32.966038 kubelet[1379]: E1002 20:14:32.966023 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:14:32.967150 kubelet[1379]: E1002 20:14:32.967073 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e67f93e4e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 32, 939306574, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 32, 939306574, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:32.967475 kubelet[1379]: W1002 20:14:32.967459 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:32.967599 kubelet[1379]: E1002 20:14:32.967566 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:32.967715 kubelet[1379]: W1002 20:14:32.967702 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:32.967809 kubelet[1379]: E1002 20:14:32.967799 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:32.975563 kernel: audit: type=1300 audit(1696277672.946:541): arch=c000003e syscall=188 success=no exit=-22 a0=c000675cb0 a1=c00090e8d0 a2=c000675c80 a3=25 items=0 ppid=1 pid=1379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:32.975784 kernel: audit: type=1327 audit(1696277672.946:541): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:14:32.946000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:14:32.957000 audit[1379]: AVC avc: denied { mac_admin } for pid=1379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.996642 kernel: audit: type=1400 audit(1696277672.957:542): avc: denied { mac_admin } for pid=1379 
comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:32.996802 kernel: audit: type=1401 audit(1696277672.957:542): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:14:32.957000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:14:33.000873 kubelet[1379]: W1002 20:14:33.000855 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:33.000998 kubelet[1379]: E1002 20:14:33.000988 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:33.001107 kubelet[1379]: E1002 20:14:33.001094 1379 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "172.24.4.201" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:14:32.957000 audit[1379]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007f42e0 a1=c00005ce88 a2=c000674060 a3=25 items=0 ppid=1 pid=1379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.001436 kubelet[1379]: E1002 20:14:33.001210 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e68564d76", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 32, 945405302, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 32, 945405302, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:33.014008 kernel: audit: type=1300 audit(1696277672.957:542): arch=c000003e syscall=188 success=no exit=-22 a0=c0007f42e0 a1=c00005ce88 a2=c000674060 a3=25 items=0 ppid=1 pid=1379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.014951 kubelet[1379]: E1002 20:14:33.014762 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb7081c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.201 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
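The rejected events are named <node>.<hex suffix>, e.g. 172.24.4.201.178a638e6bb7081c; the suffix is the event's FirstTimestamp expressed as nanoseconds since the Unix epoch, which can be checked directly against the time.Date(...) value in the same dump:

```python
# Check that the hex suffix of the event name above encodes its FirstTimestamp
# in nanoseconds since the epoch (this host's clock is UTC, per the audit stamps).
from datetime import datetime, timezone

suffix = "178a638e6bb7081c"                  # from 172.24.4.201.178a638e6bb7081c
nanoseconds = int(suffix, 16)
seconds, remainder = divmod(nanoseconds, 1_000_000_000)
print(datetime.fromtimestamp(seconds, tz=timezone.utc), f"+ {remainder} ns")
# 2023-10-02 20:14:33+00:00 + 2076188 ns, i.e. the FirstTimestamp
# time.Date(2023, time.October, 2, 20, 14, 33, 2076188, ...) shown above
```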
Oct 2 20:14:32.957000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:14:33.023583 kernel: audit: type=1327 audit(1696277672.957:542): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:14:33.023648 kubelet[1379]: I1002 20:14:33.023629 1379 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 20:14:33.023648 kubelet[1379]: I1002 20:14:33.023649 1379 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 20:14:33.023737 kubelet[1379]: I1002 20:14:33.023664 1379 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:14:33.024108 kubelet[1379]: E1002 20:14:33.024027 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb726d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.201 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
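The audit PROCTITLE payloads are the audited process's command line, hex-encoded with NUL bytes between arguments and truncated by auditd, which is why the kubelet one above cuts off at "--confi". Decoding it recovers the invocation behind all of the kubelet records here; the iptables PROCTITLE records further down decode the same way (e.g. to iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle):

```python
# Decode an audit PROCTITLE payload: hex-encoded argv joined by NUL bytes.
def decode_proctitle(hex_payload: str) -> str:
    return " ".join(arg.decode() for arg in bytes.fromhex(hex_payload).split(b"\x00") if arg)

if __name__ == "__main__":
    kubelet_proctitle = (
        "2F6F70742F62696E2F6B7562656C657400"
        "2D2D626F6F7473747261702D6B756265636F6E6669673D"
        "2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E6600"
        "2D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E6600"
        "2D2D636F6E6669"
    )
    print(decode_proctitle(kubelet_proctitle))
    # /opt/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
    #   --kubeconfig=/etc/kubernetes/kubelet.conf --confi   (truncated by auditd)
```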
Oct 2 20:14:33.026622 kubelet[1379]: E1002 20:14:33.026476 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb73256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.201 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:33.032582 kubelet[1379]: I1002 20:14:33.032535 1379 policy_none.go:49] "None policy: Start" Oct 2 20:14:33.033363 kubelet[1379]: I1002 20:14:33.033349 1379 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 20:14:33.033447 kubelet[1379]: I1002 20:14:33.033437 1379 state_mem.go:35] "Initializing new in-memory state store" Oct 2 20:14:33.042233 systemd[1]: Created slice kubepods.slice. Oct 2 20:14:33.046820 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 20:14:33.049941 systemd[1]: Created slice kubepods-besteffort.slice. 
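With the systemd cgroup driver named in the NodeConfig above, the kubelet pre-creates one slice per pod QoS tier, which is what the three "Created slice" lines show: Burstable and BestEffort pods get their own child slices, while Guaranteed pods sit directly under kubepods.slice. A rough sketch of that mapping (illustrative; the real hierarchy is managed by the kubelet's cgroup manager):

```python
# Rough mapping of pod QoS class to the systemd slices just created
# (kubepods.slice, kubepods-burstable.slice, kubepods-besteffort.slice).
def parent_slice(qos_class: str) -> str:
    return {
        "Guaranteed": "kubepods.slice",
        "Burstable":  "kubepods-burstable.slice",   # child of kubepods.slice
        "BestEffort": "kubepods-besteffort.slice",  # child of kubepods.slice
    }[qos_class]

if __name__ == "__main__":
    for qos in ("Guaranteed", "Burstable", "BestEffort"):
        print(f"{qos:<10} -> {parent_slice(qos)}")
```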
Oct 2 20:14:33.055000 audit[1396]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.060599 kernel: audit: type=1325 audit(1696277673.055:543): table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.055000 audit[1396]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff0a6dca60 a2=0 a3=7fff0a6dca4c items=0 ppid=1379 pid=1396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:14:33.056000 audit[1399]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.056000 audit[1399]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffc0cf8e700 a2=0 a3=7ffc0cf8e6ec items=0 ppid=1379 pid=1399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:14:33.066950 kubelet[1379]: E1002 20:14:33.066925 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.067729 kernel: audit: type=1300 audit(1696277673.055:543): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff0a6dca60 a2=0 a3=7fff0a6dca4c items=0 ppid=1379 pid=1396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.068131 kubelet[1379]: I1002 20:14:33.068118 1379 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 20:14:33.068277 kubelet[1379]: I1002 20:14:33.068264 1379 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 20:14:33.068517 kubelet[1379]: I1002 20:14:33.068504 1379 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 20:14:33.066000 audit[1379]: AVC avc: denied { mac_admin } for pid=1379 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:33.066000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:14:33.066000 audit[1379]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f32480 a1=c000f40138 a2=c000f32450 a3=25 items=0 ppid=1 pid=1379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.066000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:14:33.069037 kubelet[1379]: I1002 20:14:33.068162 1379 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.201" Oct 2 20:14:33.070101 kubelet[1379]: E1002 20:14:33.070080 1379 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.201\" not found" Oct 2 20:14:33.071039 kubelet[1379]: E1002 20:14:33.070975 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb7081c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.201 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 68126710, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb7081c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:33.071527 kubelet[1379]: E1002 20:14:33.071276 1379 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.201" Oct 2 20:14:33.072781 kubelet[1379]: E1002 20:14:33.072726 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb726d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.201 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 68132374, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb726d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:33.074121 kubelet[1379]: E1002 20:14:33.074074 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb73256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.201 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 68135868, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb73256" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:33.075538 kubelet[1379]: E1002 20:14:33.075496 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6fd6e594", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 71273364, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 71273364, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:33.058000 audit[1401]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.058000 audit[1401]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff26c6c230 a2=0 a3=7fff26c6c21c items=0 ppid=1379 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:14:33.083000 audit[1407]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.083000 audit[1407]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffed972d3c0 a2=0 a3=7ffed972d3ac items=0 ppid=1379 pid=1407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.083000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:14:33.135000 audit[1412]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.135000 audit[1412]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcbecaa100 a2=0 a3=7ffcbecaa0ec items=0 ppid=1379 pid=1412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.135000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 20:14:33.136000 audit[1413]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.136000 audit[1413]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd4a606c80 a2=0 a3=7ffd4a606c6c items=0 ppid=1379 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:14:33.144000 audit[1416]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.144000 audit[1416]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe046eccc0 a2=0 a3=7ffe046eccac items=0 ppid=1379 pid=1416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:14:33.149000 audit[1419]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.149000 audit[1419]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fffa11d3200 a2=0 a3=7fffa11d31ec items=0 ppid=1379 pid=1419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:14:33.150000 audit[1420]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.150000 audit[1420]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffaba596b0 a2=0 a3=7fffaba5969c items=0 ppid=1379 pid=1420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:14:33.151000 audit[1421]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.151000 audit[1421]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe697eb260 a2=0 a3=7ffe697eb24c items=0 ppid=1379 pid=1421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:14:33.154000 audit[1423]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.154000 audit[1423]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff88de4a70 a2=0 a3=7fff88de4a5c items=0 ppid=1379 pid=1423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:14:33.168103 kubelet[1379]: E1002 20:14:33.168025 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.158000 audit[1425]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.158000 audit[1425]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd69b55f40 a2=0 a3=7ffd69b55f2c items=0 ppid=1379 pid=1425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.158000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:14:33.185000 audit[1428]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.185000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc45df0530 a2=0 a3=7ffc45df051c items=0 ppid=1379 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.185000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:14:33.188000 audit[1430]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.188000 audit[1430]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffed3e79f00 a2=0 a3=7ffed3e79eec items=0 ppid=1379 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.188000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:14:33.202899 kubelet[1379]: E1002 20:14:33.202866 1379 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "172.24.4.201" 
is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:14:33.210000 audit[1433]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.210000 audit[1433]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffd92447870 a2=0 a3=7ffd9244785c items=0 ppid=1379 pid=1433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:14:33.212741 kubelet[1379]: I1002 20:14:33.212707 1379 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 20:14:33.212000 audit[1435]: NETFILTER_CFG table=mangle:17 family=2 entries=1 op=nft_register_chain pid=1435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.212000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc44cc61f0 a2=0 a3=7ffc44cc61dc items=0 ppid=1379 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:14:33.212000 audit[1434]: NETFILTER_CFG table=mangle:18 family=10 entries=2 op=nft_register_chain pid=1434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.212000 audit[1434]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffe8a20e60 a2=0 a3=7fffe8a20e4c items=0 ppid=1379 pid=1434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:14:33.214000 audit[1436]: NETFILTER_CFG table=nat:19 family=2 entries=1 op=nft_register_chain pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.214000 audit[1436]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5848ccb0 a2=0 a3=7fff5848cc9c items=0 ppid=1379 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:14:33.214000 audit[1437]: NETFILTER_CFG table=nat:20 family=10 entries=2 op=nft_register_chain pid=1437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.214000 audit[1437]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe2b697bf0 a2=0 a3=7ffe2b697bdc items=0 ppid=1379 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:14:33.215000 audit[1438]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:33.215000 audit[1438]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeab83f190 a2=0 a3=7ffeab83f17c items=0 ppid=1379 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:14:33.220000 audit[1440]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.220000 audit[1440]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff42597140 a2=0 a3=7fff4259712c items=0 ppid=1379 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:14:33.221000 audit[1441]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.221000 audit[1441]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe2f8dfaf0 a2=0 a3=7ffe2f8dfadc items=0 ppid=1379 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.221000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:14:33.223000 audit[1443]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.223000 audit[1443]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffdfaf18420 a2=0 a3=7ffdfaf1840c items=0 ppid=1379 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:14:33.225000 audit[1444]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1444 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.225000 audit[1444]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe896c0c50 a2=0 a3=7ffe896c0c3c items=0 ppid=1379 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.225000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:14:33.227000 audit[1445]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.227000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0954f1f0 a2=0 a3=7ffd0954f1dc items=0 ppid=1379 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:14:33.229000 audit[1447]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.229000 audit[1447]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc0d6c0770 a2=0 a3=7ffc0d6c075c items=0 ppid=1379 pid=1447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.229000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:14:33.231000 audit[1449]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.231000 audit[1449]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc677b0b20 a2=0 a3=7ffc677b0b0c items=0 ppid=1379 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.231000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:14:33.233000 audit[1451]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.233000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffc85ea4990 a2=0 a3=7ffc85ea497c items=0 ppid=1379 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.233000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:14:33.236000 audit[1453]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.236000 audit[1453]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd68a10f00 a2=0 a3=7ffd68a10eec items=0 ppid=1379 pid=1453 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:14:33.238000 audit[1455]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.238000 audit[1455]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffe17687ff0 a2=0 a3=7ffe17687fdc items=0 ppid=1379 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:14:33.241067 kubelet[1379]: I1002 20:14:33.241025 1379 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 20:14:33.241439 kubelet[1379]: I1002 20:14:33.241416 1379 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 20:14:33.241512 kubelet[1379]: I1002 20:14:33.241493 1379 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 20:14:33.241855 kubelet[1379]: E1002 20:14:33.241827 1379 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 20:14:33.241000 audit[1456]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.241000 audit[1456]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3e67c120 a2=0 a3=7ffc3e67c10c items=0 ppid=1379 pid=1456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:14:33.242000 audit[1457]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.242000 audit[1457]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbb72a880 a2=0 a3=7ffcbb72a86c items=0 ppid=1379 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:14:33.244802 kubelet[1379]: W1002 20:14:33.244784 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:33.244890 kubelet[1379]: E1002 20:14:33.244880 1379 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:33.245000 audit[1458]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:33.245000 audit[1458]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff90066440 a2=0 a3=7fff9006642c items=0 ppid=1379 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:33.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:14:33.269119 kubelet[1379]: E1002 20:14:33.269087 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.273309 kubelet[1379]: I1002 20:14:33.273285 1379 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.201" Oct 2 20:14:33.276734 kubelet[1379]: E1002 20:14:33.276671 1379 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.201" Oct 2 20:14:33.276984 kubelet[1379]: E1002 20:14:33.276867 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb7081c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.201 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 273238827, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb7081c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:33.278773 kubelet[1379]: E1002 20:14:33.278660 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb726d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.201 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 273246775, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb726d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:33.347110 kubelet[1379]: E1002 20:14:33.346943 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb73256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.201 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 273251860, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb73256" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:33.369462 kubelet[1379]: E1002 20:14:33.369394 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.474657 kubelet[1379]: E1002 20:14:33.470026 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.571145 kubelet[1379]: E1002 20:14:33.571085 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.606368 kubelet[1379]: E1002 20:14:33.606288 1379 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "172.24.4.201" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:14:33.672088 kubelet[1379]: E1002 20:14:33.671960 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.678557 kubelet[1379]: I1002 20:14:33.678524 1379 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.201" Oct 2 20:14:33.681243 kubelet[1379]: E1002 20:14:33.681199 1379 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.201" Oct 2 20:14:33.681706 kubelet[1379]: E1002 20:14:33.681538 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb7081c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.201 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 678465555, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb7081c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:33.766172 kubelet[1379]: E1002 20:14:33.765293 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb726d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.201 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 678475625, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb726d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:33.772638 kubelet[1379]: E1002 20:14:33.772541 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.821778 kubelet[1379]: W1002 20:14:33.821728 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:33.822130 kubelet[1379]: E1002 20:14:33.822060 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:33.822296 kubelet[1379]: W1002 20:14:33.821761 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:33.822481 kubelet[1379]: E1002 20:14:33.822458 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:33.873360 kubelet[1379]: E1002 20:14:33.873269 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:33.938274 kubelet[1379]: E1002 20:14:33.938096 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:33.947225 kubelet[1379]: E1002 20:14:33.947050 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb73256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 
0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.201 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 678481790, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb73256" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:33.973631 kubelet[1379]: E1002 20:14:33.973520 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.073758 kubelet[1379]: E1002 20:14:34.073693 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.174607 kubelet[1379]: E1002 20:14:34.174398 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.275364 kubelet[1379]: E1002 20:14:34.275231 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.376640 kubelet[1379]: E1002 20:14:34.376142 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.409562 kubelet[1379]: E1002 20:14:34.409482 1379 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "172.24.4.201" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:14:34.443381 kubelet[1379]: W1002 20:14:34.443307 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:34.443381 kubelet[1379]: E1002 20:14:34.443367 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:34.477039 kubelet[1379]: E1002 20:14:34.476904 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.483288 kubelet[1379]: I1002 20:14:34.483243 1379 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.201" Oct 2 20:14:34.485147 kubelet[1379]: E1002 20:14:34.485081 1379 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.201" Oct 2 20:14:34.485447 kubelet[1379]: E1002 20:14:34.485325 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb7081c", GenerateName:"", Namespace:"default", 
SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.201 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 34, 483187925, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb7081c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:34.487043 kubelet[1379]: E1002 20:14:34.486936 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb726d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.201 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 34, 483197375, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb726d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:34.546994 kubelet[1379]: E1002 20:14:34.546860 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb73256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.201 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 34, 483203503, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb73256" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:34.577495 kubelet[1379]: E1002 20:14:34.577456 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.679611 kubelet[1379]: E1002 20:14:34.678313 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.687045 kubelet[1379]: W1002 20:14:34.687009 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:34.687262 kubelet[1379]: E1002 20:14:34.687237 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:34.781035 kubelet[1379]: E1002 20:14:34.780969 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.881917 kubelet[1379]: E1002 20:14:34.881864 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:34.939425 kubelet[1379]: E1002 20:14:34.938834 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:34.982112 kubelet[1379]: E1002 20:14:34.982067 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.083444 kubelet[1379]: E1002 20:14:35.083388 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.184658 kubelet[1379]: E1002 20:14:35.184423 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.286163 kubelet[1379]: E1002 20:14:35.285470 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.386378 kubelet[1379]: E1002 20:14:35.386279 1379 kubelet.go:2448] "Error getting 
node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.487426 kubelet[1379]: E1002 20:14:35.487301 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.588352 kubelet[1379]: E1002 20:14:35.588232 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.689362 kubelet[1379]: E1002 20:14:35.689234 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.790293 kubelet[1379]: E1002 20:14:35.790212 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.891355 kubelet[1379]: E1002 20:14:35.891133 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:35.939922 kubelet[1379]: E1002 20:14:35.939786 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:35.991716 kubelet[1379]: E1002 20:14:35.991635 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.011993 kubelet[1379]: E1002 20:14:36.011920 1379 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "172.24.4.201" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:14:36.088215 kubelet[1379]: I1002 20:14:36.088121 1379 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.201" Oct 2 20:14:36.089928 kubelet[1379]: E1002 20:14:36.089730 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb7081c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.201 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 36, 87175496, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb7081c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:36.090527 kubelet[1379]: E1002 20:14:36.090497 1379 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.201" Oct 2 20:14:36.091671 kubelet[1379]: E1002 20:14:36.091504 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb726d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.201 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 36, 87207177, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb726d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:36.092019 kubelet[1379]: E1002 20:14:36.091982 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.093358 kubelet[1379]: E1002 20:14:36.093209 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb73256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.201 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 36, 87218851, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb73256" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:36.194328 kubelet[1379]: E1002 20:14:36.193083 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.262677 kubelet[1379]: W1002 20:14:36.262604 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:36.262677 kubelet[1379]: E1002 20:14:36.262669 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:36.272382 kubelet[1379]: W1002 20:14:36.272309 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:36.272382 kubelet[1379]: E1002 20:14:36.272367 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:36.293988 kubelet[1379]: E1002 20:14:36.293942 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.395031 kubelet[1379]: E1002 20:14:36.394961 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.496844 kubelet[1379]: E1002 20:14:36.496057 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.597202 kubelet[1379]: E1002 20:14:36.597149 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.698306 kubelet[1379]: E1002 20:14:36.698248 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.799440 kubelet[1379]: E1002 20:14:36.799224 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.834306 kubelet[1379]: W1002 20:14:36.834239 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:36.834306 kubelet[1379]: E1002 20:14:36.834299 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:36.900147 kubelet[1379]: E1002 20:14:36.900099 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:36.940851 kubelet[1379]: E1002 20:14:36.940698 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:37.000627 kubelet[1379]: E1002 20:14:37.000445 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.100757 kubelet[1379]: E1002 20:14:37.100620 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.201667 kubelet[1379]: E1002 20:14:37.201544 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 
20:14:37.302857 kubelet[1379]: E1002 20:14:37.302666 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.403800 kubelet[1379]: E1002 20:14:37.403624 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.504873 kubelet[1379]: E1002 20:14:37.504749 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.605829 kubelet[1379]: E1002 20:14:37.605630 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.673998 kubelet[1379]: W1002 20:14:37.673781 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:37.673998 kubelet[1379]: E1002 20:14:37.673883 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:37.706444 kubelet[1379]: E1002 20:14:37.706404 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.807759 kubelet[1379]: E1002 20:14:37.807676 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.908800 kubelet[1379]: E1002 20:14:37.908731 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:37.941446 kubelet[1379]: E1002 20:14:37.941299 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:38.009650 kubelet[1379]: E1002 20:14:38.009609 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.070474 kubelet[1379]: E1002 20:14:38.070401 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:14:38.110138 kubelet[1379]: E1002 20:14:38.110004 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.211198 kubelet[1379]: E1002 20:14:38.210945 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.311501 kubelet[1379]: E1002 20:14:38.311428 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.412670 kubelet[1379]: E1002 20:14:38.412617 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.513941 kubelet[1379]: E1002 20:14:38.513785 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.614723 kubelet[1379]: E1002 20:14:38.614669 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.715793 kubelet[1379]: E1002 20:14:38.715714 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.816990 kubelet[1379]: E1002 20:14:38.816822 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.917955 kubelet[1379]: E1002 20:14:38.917864 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:38.942386 kubelet[1379]: E1002 20:14:38.942349 1379 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:39.019051 kubelet[1379]: E1002 20:14:39.018969 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.120302 kubelet[1379]: E1002 20:14:39.120239 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.214284 kubelet[1379]: E1002 20:14:39.214204 1379 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "172.24.4.201" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:14:39.221299 kubelet[1379]: E1002 20:14:39.221264 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.293032 kubelet[1379]: I1002 20:14:39.292990 1379 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.201" Oct 2 20:14:39.296176 kubelet[1379]: E1002 20:14:39.296103 1379 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.24.4.201" Oct 2 20:14:39.296526 kubelet[1379]: E1002 20:14:39.296330 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb7081c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.24.4.201 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2076188, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 39, 292845434, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb7081c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:39.298858 kubelet[1379]: E1002 20:14:39.298729 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb726d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.24.4.201 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2084056, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 39, 292867957, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb726d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:14:39.301248 kubelet[1379]: E1002 20:14:39.301117 1379 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.24.4.201.178a638e6bb73256", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.24.4.201", UID:"172.24.4.201", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.24.4.201 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.24.4.201"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 14, 33, 2086998, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 14, 39, 292947133, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "172.24.4.201.178a638e6bb73256" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:14:39.321664 kubelet[1379]: E1002 20:14:39.321608 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.422795 kubelet[1379]: E1002 20:14:39.422520 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.524711 kubelet[1379]: E1002 20:14:39.524626 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.624911 kubelet[1379]: E1002 20:14:39.624850 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.725936 kubelet[1379]: E1002 20:14:39.725705 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.826368 kubelet[1379]: E1002 20:14:39.826296 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.927427 kubelet[1379]: E1002 20:14:39.927314 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:39.942923 kubelet[1379]: E1002 20:14:39.942887 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:40.022014 kubelet[1379]: W1002 20:14:40.021845 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:40.022014 kubelet[1379]: E1002 20:14:40.021947 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:14:40.027895 kubelet[1379]: E1002 20:14:40.027862 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.128395 kubelet[1379]: E1002 20:14:40.128303 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.229307 kubelet[1379]: E1002 20:14:40.229162 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.330425 kubelet[1379]: E1002 20:14:40.330305 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.431525 kubelet[1379]: E1002 20:14:40.431305 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.532677 kubelet[1379]: E1002 20:14:40.532537 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.633816 kubelet[1379]: E1002 20:14:40.633656 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.734702 kubelet[1379]: E1002 20:14:40.734631 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.835641 kubelet[1379]: E1002 20:14:40.835549 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.935950 kubelet[1379]: E1002 20:14:40.935703 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:40.944209 kubelet[1379]: E1002 20:14:40.944095 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:41.036076 kubelet[1379]: E1002 20:14:41.035908 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.136237 
kubelet[1379]: E1002 20:14:41.136091 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.237258 kubelet[1379]: E1002 20:14:41.236986 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.337264 kubelet[1379]: E1002 20:14:41.337207 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.437556 kubelet[1379]: E1002 20:14:41.437333 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.538439 kubelet[1379]: E1002 20:14:41.538284 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.639781 kubelet[1379]: E1002 20:14:41.639723 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.740789 kubelet[1379]: E1002 20:14:41.740697 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.841733 kubelet[1379]: E1002 20:14:41.841641 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.936298 kubelet[1379]: W1002 20:14:41.936213 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:41.936693 kubelet[1379]: E1002 20:14:41.936657 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.24.4.201" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:14:41.942525 kubelet[1379]: E1002 20:14:41.942442 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:41.944808 kubelet[1379]: E1002 20:14:41.944775 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:42.043297 kubelet[1379]: E1002 20:14:42.043208 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.143648 kubelet[1379]: E1002 20:14:42.143451 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.245046 kubelet[1379]: E1002 20:14:42.244959 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.294505 kubelet[1379]: W1002 20:14:42.294458 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:42.294850 kubelet[1379]: E1002 20:14:42.294823 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:14:42.345769 kubelet[1379]: E1002 20:14:42.345701 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.446433 kubelet[1379]: E1002 20:14:42.446196 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.547252 kubelet[1379]: E1002 20:14:42.547168 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.607952 kubelet[1379]: 
W1002 20:14:42.607900 1379 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:42.608332 kubelet[1379]: E1002 20:14:42.608307 1379 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:14:42.648423 kubelet[1379]: E1002 20:14:42.648370 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.748841 kubelet[1379]: E1002 20:14:42.748672 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.849782 kubelet[1379]: E1002 20:14:42.849720 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:42.931597 kubelet[1379]: I1002 20:14:42.931495 1379 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 20:14:42.945267 kubelet[1379]: E1002 20:14:42.945198 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:42.950719 kubelet[1379]: E1002 20:14:42.950654 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.051894 kubelet[1379]: E1002 20:14:43.051666 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.070773 kubelet[1379]: E1002 20:14:43.070744 1379 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.201\" not found" Oct 2 20:14:43.071364 kubelet[1379]: E1002 20:14:43.071311 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:14:43.152372 kubelet[1379]: E1002 20:14:43.152292 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.252788 kubelet[1379]: E1002 20:14:43.252723 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.348215 kubelet[1379]: E1002 20:14:43.348143 1379 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.201" not found Oct 2 20:14:43.353291 kubelet[1379]: E1002 20:14:43.353223 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.453955 kubelet[1379]: E1002 20:14:43.453903 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.554946 kubelet[1379]: E1002 20:14:43.554841 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.655766 kubelet[1379]: E1002 20:14:43.655505 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.757151 kubelet[1379]: E1002 20:14:43.757093 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.858404 kubelet[1379]: E1002 20:14:43.858266 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:43.945950 kubelet[1379]: E1002 20:14:43.945761 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:14:43.958845 kubelet[1379]: E1002 20:14:43.958746 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.059004 kubelet[1379]: E1002 20:14:44.058956 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.159739 kubelet[1379]: E1002 20:14:44.159649 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.260261 kubelet[1379]: E1002 20:14:44.260093 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.361376 kubelet[1379]: E1002 20:14:44.361293 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.374620 kubelet[1379]: E1002 20:14:44.374513 1379 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.24.4.201" not found Oct 2 20:14:44.462333 kubelet[1379]: E1002 20:14:44.462221 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.563378 kubelet[1379]: E1002 20:14:44.563208 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.664032 kubelet[1379]: E1002 20:14:44.663937 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.765263 kubelet[1379]: E1002 20:14:44.765207 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.866383 kubelet[1379]: E1002 20:14:44.866304 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:44.946624 kubelet[1379]: E1002 20:14:44.946526 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:44.967230 kubelet[1379]: E1002 20:14:44.967167 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.067692 kubelet[1379]: E1002 20:14:45.067645 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.168662 kubelet[1379]: E1002 20:14:45.167920 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.269640 kubelet[1379]: E1002 20:14:45.269538 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.370245 kubelet[1379]: E1002 20:14:45.370205 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.471257 kubelet[1379]: E1002 20:14:45.470548 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.572367 kubelet[1379]: E1002 20:14:45.572272 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.622778 kubelet[1379]: E1002 20:14:45.622732 1379 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.24.4.201\" not found" node="172.24.4.201" Oct 2 20:14:45.672987 kubelet[1379]: E1002 20:14:45.672904 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.698688 kubelet[1379]: I1002 20:14:45.698652 1379 kubelet_node_status.go:70] "Attempting to register node" node="172.24.4.201" Oct 2 20:14:45.774235 kubelet[1379]: E1002 20:14:45.773361 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.777083 kubelet[1379]: I1002 20:14:45.777047 1379 kubelet_node_status.go:73] "Successfully 
registered node" node="172.24.4.201" Oct 2 20:14:45.875690 kubelet[1379]: E1002 20:14:45.875551 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:45.897456 sudo[1191]: pam_unix(sudo:session): session closed for user root Oct 2 20:14:45.896000 audit[1191]: USER_END pid=1191 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:45.900776 kernel: kauditd_printk_skb: 101 callbacks suppressed Oct 2 20:14:45.900901 kernel: audit: type=1106 audit(1696277685.896:577): pid=1191 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:45.897000 audit[1191]: CRED_DISP pid=1191 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:45.920283 kernel: audit: type=1104 audit(1696277685.897:578): pid=1191 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:14:45.947323 kubelet[1379]: E1002 20:14:45.947169 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:45.976403 kubelet[1379]: E1002 20:14:45.976284 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.064382 sshd[1187]: pam_unix(sshd:session): session closed for user core Oct 2 20:14:46.066000 audit[1187]: USER_END pid=1187 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:46.080708 kernel: audit: type=1106 audit(1696277686.066:579): pid=1187 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:46.066000 audit[1187]: CRED_DISP pid=1187 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:46.082650 kubelet[1379]: E1002 20:14:46.082263 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.092446 systemd[1]: sshd@6-172.24.4.201:22-172.24.4.1:55476.service: Deactivated successfully. 
Oct 2 20:14:46.093033 kernel: audit: type=1104 audit(1696277686.066:580): pid=1187 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=172.24.4.1 addr=172.24.4.1 terminal=ssh res=success' Oct 2 20:14:46.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.201:22-172.24.4.1:55476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:46.094326 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 20:14:46.103627 kernel: audit: type=1131 audit(1696277686.092:581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.24.4.201:22-172.24.4.1:55476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:14:46.104518 systemd-logind[1042]: Session 7 logged out. Waiting for processes to exit. Oct 2 20:14:46.107203 systemd-logind[1042]: Removed session 7. Oct 2 20:14:46.182701 kubelet[1379]: E1002 20:14:46.182659 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.283500 kubelet[1379]: E1002 20:14:46.283441 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.384390 kubelet[1379]: E1002 20:14:46.384291 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.484608 kubelet[1379]: E1002 20:14:46.484511 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.585761 kubelet[1379]: E1002 20:14:46.585672 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.686877 kubelet[1379]: E1002 20:14:46.686662 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.787543 kubelet[1379]: E1002 20:14:46.787472 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.888394 kubelet[1379]: E1002 20:14:46.888315 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:46.947725 kubelet[1379]: E1002 20:14:46.947507 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:46.989501 kubelet[1379]: E1002 20:14:46.989426 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.090130 kubelet[1379]: E1002 20:14:47.090012 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.191167 kubelet[1379]: E1002 20:14:47.191037 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.291643 kubelet[1379]: E1002 20:14:47.291452 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.392879 kubelet[1379]: E1002 20:14:47.392815 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.493074 kubelet[1379]: E1002 20:14:47.493000 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.594099 kubelet[1379]: E1002 20:14:47.594033 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.695068 kubelet[1379]: E1002 20:14:47.695006 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.796175 kubelet[1379]: E1002 
20:14:47.796096 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.897275 kubelet[1379]: E1002 20:14:47.897089 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:47.947938 kubelet[1379]: E1002 20:14:47.947892 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:47.997965 kubelet[1379]: E1002 20:14:47.997813 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.072767 kubelet[1379]: E1002 20:14:48.072708 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:14:48.098351 kubelet[1379]: E1002 20:14:48.098309 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.199540 kubelet[1379]: E1002 20:14:48.199370 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.299971 kubelet[1379]: E1002 20:14:48.299900 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.400985 kubelet[1379]: E1002 20:14:48.400859 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.502169 kubelet[1379]: E1002 20:14:48.501921 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.602900 kubelet[1379]: E1002 20:14:48.602774 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.703887 kubelet[1379]: E1002 20:14:48.703756 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.804932 kubelet[1379]: E1002 20:14:48.804768 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.905737 kubelet[1379]: E1002 20:14:48.905646 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:48.948653 kubelet[1379]: E1002 20:14:48.948531 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:49.006804 kubelet[1379]: E1002 20:14:49.006683 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.106920 kubelet[1379]: E1002 20:14:49.106769 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.207843 kubelet[1379]: E1002 20:14:49.207691 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.308287 kubelet[1379]: E1002 20:14:49.308118 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.409262 kubelet[1379]: E1002 20:14:49.409026 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.510253 kubelet[1379]: E1002 20:14:49.510207 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.611326 kubelet[1379]: E1002 20:14:49.611236 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.712419 kubelet[1379]: E1002 20:14:49.712177 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.813517 kubelet[1379]: E1002 20:14:49.813253 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.914367 kubelet[1379]: E1002 
20:14:49.914284 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:49.949280 kubelet[1379]: E1002 20:14:49.949161 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:50.015334 kubelet[1379]: E1002 20:14:50.015164 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.116456 kubelet[1379]: E1002 20:14:50.116332 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.217233 kubelet[1379]: E1002 20:14:50.217158 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.317492 kubelet[1379]: E1002 20:14:50.317342 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.418700 kubelet[1379]: E1002 20:14:50.418666 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.519855 kubelet[1379]: E1002 20:14:50.519820 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.621415 kubelet[1379]: E1002 20:14:50.621294 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.722382 kubelet[1379]: E1002 20:14:50.722342 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.822703 kubelet[1379]: E1002 20:14:50.822670 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.923799 kubelet[1379]: E1002 20:14:50.923659 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:50.949709 kubelet[1379]: E1002 20:14:50.949676 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:51.024791 kubelet[1379]: E1002 20:14:51.024755 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.126007 kubelet[1379]: E1002 20:14:51.125971 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.227086 kubelet[1379]: E1002 20:14:51.226957 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.327906 kubelet[1379]: E1002 20:14:51.327785 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.428968 kubelet[1379]: E1002 20:14:51.428870 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.529532 kubelet[1379]: E1002 20:14:51.529148 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.631176 kubelet[1379]: E1002 20:14:51.631136 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.732521 kubelet[1379]: E1002 20:14:51.732444 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.833654 kubelet[1379]: E1002 20:14:51.833606 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.934978 kubelet[1379]: E1002 20:14:51.934906 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:51.950738 kubelet[1379]: E1002 20:14:51.950701 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:52.035760 kubelet[1379]: E1002 20:14:52.035713 1379 kubelet.go:2448] "Error getting node" err="node 
\"172.24.4.201\" not found" Oct 2 20:14:52.136015 kubelet[1379]: E1002 20:14:52.135878 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.237186 kubelet[1379]: E1002 20:14:52.237115 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.337905 kubelet[1379]: E1002 20:14:52.337831 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.438917 kubelet[1379]: E1002 20:14:52.438750 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.539853 kubelet[1379]: E1002 20:14:52.539773 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.640862 kubelet[1379]: E1002 20:14:52.640816 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.742064 kubelet[1379]: E1002 20:14:52.741885 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.842868 kubelet[1379]: E1002 20:14:52.842833 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.938457 kubelet[1379]: E1002 20:14:52.938334 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:52.943760 kubelet[1379]: E1002 20:14:52.943690 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:52.951349 kubelet[1379]: E1002 20:14:52.951230 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:53.044482 kubelet[1379]: E1002 20:14:53.044342 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:53.073208 kubelet[1379]: E1002 20:14:53.072896 1379 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.24.4.201\" not found" Oct 2 20:14:53.073956 kubelet[1379]: E1002 20:14:53.073872 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:14:53.145104 kubelet[1379]: E1002 20:14:53.144868 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:53.245103 kubelet[1379]: E1002 20:14:53.245005 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:53.345604 kubelet[1379]: E1002 20:14:53.345489 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:53.446708 kubelet[1379]: E1002 20:14:53.446661 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:53.546924 kubelet[1379]: E1002 20:14:53.546814 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:53.650009 kubelet[1379]: E1002 20:14:53.648293 1379 kubelet.go:2448] "Error getting node" err="node \"172.24.4.201\" not found" Oct 2 20:14:53.749129 kubelet[1379]: I1002 20:14:53.748982 1379 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 20:14:53.750926 env[1055]: time="2023-10-02T20:14:53.749825041Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 20:14:53.751848 kubelet[1379]: I1002 20:14:53.750420 1379 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 20:14:53.751848 kubelet[1379]: E1002 20:14:53.751029 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:14:53.951937 kubelet[1379]: I1002 20:14:53.951184 1379 apiserver.go:52] "Watching apiserver" Oct 2 20:14:53.951937 kubelet[1379]: E1002 20:14:53.951792 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:53.957050 kubelet[1379]: I1002 20:14:53.957004 1379 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:14:53.957362 kubelet[1379]: I1002 20:14:53.957332 1379 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:14:53.973212 systemd[1]: Created slice kubepods-besteffort-podd4454edb_a443_4b4f_92e6_bf9ed8302a90.slice. Oct 2 20:14:53.992243 systemd[1]: Created slice kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice. Oct 2 20:14:54.000876 kubelet[1379]: I1002 20:14:54.000825 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-net\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001067 kubelet[1379]: I1002 20:14:54.000950 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4454edb-a443-4b4f-92e6-bf9ed8302a90-lib-modules\") pod \"kube-proxy-c2qtj\" (UID: \"d4454edb-a443-4b4f-92e6-bf9ed8302a90\") " pod="kube-system/kube-proxy-c2qtj" Oct 2 20:14:54.001067 kubelet[1379]: I1002 20:14:54.001024 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-cgroup\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001207 kubelet[1379]: I1002 20:14:54.001091 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cni-path\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001207 kubelet[1379]: I1002 20:14:54.001159 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-etc-cni-netd\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001336 kubelet[1379]: I1002 20:14:54.001238 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-lib-modules\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001336 kubelet[1379]: I1002 20:14:54.001300 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hubble-tls\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001474 kubelet[1379]: I1002 20:14:54.001364 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlxn\" (UniqueName: \"kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-kube-api-access-jzlxn\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001474 kubelet[1379]: I1002 20:14:54.001431 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4454edb-a443-4b4f-92e6-bf9ed8302a90-kube-proxy\") pod \"kube-proxy-c2qtj\" (UID: \"d4454edb-a443-4b4f-92e6-bf9ed8302a90\") " pod="kube-system/kube-proxy-c2qtj" Oct 2 20:14:54.001664 kubelet[1379]: I1002 20:14:54.001490 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hostproc\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001664 kubelet[1379]: I1002 20:14:54.001552 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-xtables-lock\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001826 kubelet[1379]: I1002 20:14:54.001667 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-config-path\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001826 kubelet[1379]: I1002 20:14:54.001760 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-kernel\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.001826 kubelet[1379]: I1002 20:14:54.001824 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4454edb-a443-4b4f-92e6-bf9ed8302a90-xtables-lock\") pod \"kube-proxy-c2qtj\" (UID: \"d4454edb-a443-4b4f-92e6-bf9ed8302a90\") " pod="kube-system/kube-proxy-c2qtj" Oct 2 20:14:54.002048 kubelet[1379]: I1002 20:14:54.001890 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jhm\" (UniqueName: \"kubernetes.io/projected/d4454edb-a443-4b4f-92e6-bf9ed8302a90-kube-api-access-75jhm\") pod \"kube-proxy-c2qtj\" (UID: \"d4454edb-a443-4b4f-92e6-bf9ed8302a90\") " pod="kube-system/kube-proxy-c2qtj" Oct 2 20:14:54.002048 kubelet[1379]: I1002 20:14:54.001950 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-bpf-maps\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.002188 kubelet[1379]: 
I1002 20:14:54.002068 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-run\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.002188 kubelet[1379]: I1002 20:14:54.002140 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f4edfc2-67cc-4cfe-9338-b99187e9c818-clustermesh-secrets\") pod \"cilium-s8fg6\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " pod="kube-system/cilium-s8fg6" Oct 2 20:14:54.002188 kubelet[1379]: I1002 20:14:54.002162 1379 reconciler.go:169] "Reconciler: start to sync state" Oct 2 20:14:54.305900 env[1055]: time="2023-10-02T20:14:54.305661870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8fg6,Uid:4f4edfc2-67cc-4cfe-9338-b99187e9c818,Namespace:kube-system,Attempt:0,}" Oct 2 20:14:54.586685 env[1055]: time="2023-10-02T20:14:54.586543862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2qtj,Uid:d4454edb-a443-4b4f-92e6-bf9ed8302a90,Namespace:kube-system,Attempt:0,}" Oct 2 20:14:54.953314 kubelet[1379]: E1002 20:14:54.953092 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:55.102152 env[1055]: time="2023-10-02T20:14:55.102074317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.105914 env[1055]: time="2023-10-02T20:14:55.105861202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.109155 env[1055]: time="2023-10-02T20:14:55.109103432Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.122306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096748290.mount: Deactivated successfully. 
Oct 2 20:14:55.130219 env[1055]: time="2023-10-02T20:14:55.130122829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.132296 env[1055]: time="2023-10-02T20:14:55.132223978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.138489 env[1055]: time="2023-10-02T20:14:55.138425364Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.144370 env[1055]: time="2023-10-02T20:14:55.144316574Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.146444 env[1055]: time="2023-10-02T20:14:55.146392255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:55.188509 env[1055]: time="2023-10-02T20:14:55.188328917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:14:55.188509 env[1055]: time="2023-10-02T20:14:55.188374113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:14:55.188509 env[1055]: time="2023-10-02T20:14:55.188387277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:14:55.189639 env[1055]: time="2023-10-02T20:14:55.188803821Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3abc3d697ecbc06eb3b4e2530523c6177fac3b2045a297454ea656cc24bedd6f pid=1472 runtime=io.containerd.runc.v2 Oct 2 20:14:55.206422 env[1055]: time="2023-10-02T20:14:55.206100614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:14:55.206422 env[1055]: time="2023-10-02T20:14:55.206174865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:14:55.206422 env[1055]: time="2023-10-02T20:14:55.206190584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:14:55.206819 env[1055]: time="2023-10-02T20:14:55.206763534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25 pid=1490 runtime=io.containerd.runc.v2 Oct 2 20:14:55.224137 systemd[1]: Started cri-containerd-3abc3d697ecbc06eb3b4e2530523c6177fac3b2045a297454ea656cc24bedd6f.scope. Oct 2 20:14:55.241935 systemd[1]: Started cri-containerd-77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25.scope. 
Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259625 kernel: audit: type=1400 audit(1696277695.254:582): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.266935 kernel: audit: type=1400 audit(1696277695.254:583): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.267065 kernel: audit: type=1400 audit(1696277695.254:584): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.276658 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:14:55.276716 kernel: audit: type=1400 audit(1696277695.254:585): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.276744 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 20:14:55.276766 kernel: audit: type=1400 audit(1696277695.254:586): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.276793 kernel: audit: backlog limit exceeded Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.278461 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 20:14:55.278509 kernel: audit: type=1400 audit(1696277695.254:587): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.254000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.255000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.255000 audit: BPF prog-id=67 op=LOAD Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1472 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361626333643639376563626330366562336234653235333035323363 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1472 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361626333643639376563626330366562336234653235333035323363 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit: BPF prog-id=68 op=LOAD Oct 2 20:14:55.259000 audit[1491]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0003b66f0 items=0 ppid=1472 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361626333643639376563626330366562336234653235333035323363 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit: BPF prog-id=69 op=LOAD Oct 2 20:14:55.259000 audit[1491]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0003b6738 items=0 ppid=1472 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361626333643639376563626330366562336234653235333035323363 Oct 2 20:14:55.259000 audit: BPF prog-id=69 op=UNLOAD Oct 2 20:14:55.259000 audit: BPF prog-id=68 op=UNLOAD Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { perfmon } for pid=1491 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit[1491]: AVC avc: denied { bpf } for pid=1491 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.259000 audit: BPF prog-id=70 op=LOAD Oct 2 20:14:55.259000 audit[1491]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0003b6b48 items=0 ppid=1472 pid=1491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.259000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361626333643639376563626330366562336234653235333035323363 Oct 2 20:14:55.267000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.267000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.267000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.267000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.267000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.267000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.267000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.273000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.276000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.281000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.281000 audit[1505]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1490 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.281000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643132373131383538393339623465373536646234336337393966 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1490 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643132373131383538393339623465373536646234336337393966 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:14:55.283000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.283000 audit: BPF prog-id=72 op=LOAD Oct 2 20:14:55.283000 audit[1505]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00029c610 items=0 ppid=1490 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.283000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643132373131383538393339623465373536646234336337393966 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit: BPF prog-id=73 op=LOAD Oct 2 20:14:55.284000 audit[1505]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00029c658 items=0 ppid=1490 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643132373131383538393339623465373536646234336337393966 Oct 2 20:14:55.284000 audit: BPF prog-id=73 op=UNLOAD Oct 2 20:14:55.284000 audit: BPF prog-id=72 op=UNLOAD Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { perfmon } for pid=1505 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit[1505]: AVC avc: denied { bpf } for pid=1505 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:55.284000 audit: BPF prog-id=74 op=LOAD Oct 2 20:14:55.284000 audit[1505]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00029ca68 items=0 ppid=1490 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:55.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737643132373131383538393339623465373536646234336337393966 Oct 2 20:14:55.305481 env[1055]: time="2023-10-02T20:14:55.305419773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c2qtj,Uid:d4454edb-a443-4b4f-92e6-bf9ed8302a90,Namespace:kube-system,Attempt:0,} returns sandbox id \"3abc3d697ecbc06eb3b4e2530523c6177fac3b2045a297454ea656cc24bedd6f\"" Oct 2 20:14:55.308040 env[1055]: time="2023-10-02T20:14:55.308002800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 20:14:55.308511 env[1055]: time="2023-10-02T20:14:55.308473036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8fg6,Uid:4f4edfc2-67cc-4cfe-9338-b99187e9c818,Namespace:kube-system,Attempt:0,} returns sandbox id \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\"" Oct 2 20:14:55.954304 kubelet[1379]: E1002 20:14:55.954180 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:56.634152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336993369.mount: Deactivated successfully. 
Oct 2 20:14:56.954945 kubelet[1379]: E1002 20:14:56.954708 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:57.272359 env[1055]: time="2023-10-02T20:14:57.272174417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:57.274866 env[1055]: time="2023-10-02T20:14:57.274806465Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:57.277895 env[1055]: time="2023-10-02T20:14:57.277845047Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:57.280744 env[1055]: time="2023-10-02T20:14:57.280694525Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:14:57.281948 env[1055]: time="2023-10-02T20:14:57.281907399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 20:14:57.284527 env[1055]: time="2023-10-02T20:14:57.284461129Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 20:14:57.286355 env[1055]: time="2023-10-02T20:14:57.286301956Z" level=info msg="CreateContainer within sandbox \"3abc3d697ecbc06eb3b4e2530523c6177fac3b2045a297454ea656cc24bedd6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 20:14:57.302117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569275961.mount: Deactivated successfully. Oct 2 20:14:57.306309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312229004.mount: Deactivated successfully. Oct 2 20:14:57.317972 env[1055]: time="2023-10-02T20:14:57.317899698Z" level=info msg="CreateContainer within sandbox \"3abc3d697ecbc06eb3b4e2530523c6177fac3b2045a297454ea656cc24bedd6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c4f6008871cca1d46aeb5c55b7179636e0864b1fd8c9a447f5f03cab7594958\"" Oct 2 20:14:57.319199 env[1055]: time="2023-10-02T20:14:57.319132120Z" level=info msg="StartContainer for \"5c4f6008871cca1d46aeb5c55b7179636e0864b1fd8c9a447f5f03cab7594958\"" Oct 2 20:14:57.346917 systemd[1]: Started cri-containerd-5c4f6008871cca1d46aeb5c55b7179636e0864b1fd8c9a447f5f03cab7594958.scope. 
Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=1472 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346636303038383731636361316434366165623563353562373137 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit: BPF prog-id=75 op=LOAD Oct 2 20:14:57.380000 audit[1553]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001459d8 a2=78 a3=c0003e83b0 items=0 ppid=1472 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.380000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346636303038383731636361316434366165623563353562373137 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit: BPF prog-id=76 op=LOAD Oct 2 20:14:57.380000 audit[1553]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000145770 a2=78 a3=c0003e83f8 items=0 ppid=1472 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346636303038383731636361316434366165623563353562373137 Oct 2 20:14:57.380000 audit: BPF prog-id=76 op=UNLOAD Oct 2 20:14:57.380000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { perfmon } for pid=1553 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit[1553]: AVC avc: denied { bpf } for pid=1553 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:14:57.380000 audit: BPF prog-id=77 op=LOAD Oct 2 20:14:57.380000 audit[1553]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000145c30 a2=78 a3=c0003e8488 items=0 ppid=1472 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563346636303038383731636361316434366165623563353562373137 Oct 2 20:14:57.403461 env[1055]: time="2023-10-02T20:14:57.403422196Z" level=info msg="StartContainer for \"5c4f6008871cca1d46aeb5c55b7179636e0864b1fd8c9a447f5f03cab7594958\" returns successfully" Oct 2 20:14:57.453052 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 20:14:57.453208 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 20:14:57.453939 kernel: IPVS: ipvs loaded. Oct 2 20:14:57.479690 kernel: IPVS: [rr] scheduler registered. Oct 2 20:14:57.490609 kernel: IPVS: [wrr] scheduler registered. Oct 2 20:14:57.498599 kernel: IPVS: [sh] scheduler registered. 
Oct 2 20:14:57.577000 audit[1614]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.577000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd787a4e90 a2=0 a3=7ffd787a4e7c items=0 ppid=1567 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:14:57.580000 audit[1615]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.580000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc736855a0 a2=0 a3=7ffc7368558c items=0 ppid=1567 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:14:57.581000 audit[1616]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.581000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff748b51e0 a2=0 a3=7fff748b51cc items=0 ppid=1567 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:14:57.582000 audit[1617]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.582000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff13070ff0 a2=0 a3=7fff13070fdc items=0 ppid=1567 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:14:57.583000 audit[1618]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=1618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.583000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5c1897e0 a2=0 a3=7fff5c1897cc items=0 ppid=1567 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:14:57.584000 audit[1619]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
20:14:57.584000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd18953800 a2=0 a3=7ffd189537ec items=0 ppid=1567 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:14:57.683000 audit[1620]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.683000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeca6aa210 a2=0 a3=7ffeca6aa1fc items=0 ppid=1567 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.683000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:14:57.690000 audit[1622]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.690000 audit[1622]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc5f8d44a0 a2=0 a3=7ffc5f8d448c items=0 ppid=1567 pid=1622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.690000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 20:14:57.701000 audit[1625]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.701000 audit[1625]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd894b4590 a2=0 a3=7ffd894b457c items=0 ppid=1567 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 20:14:57.704000 audit[1626]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.704000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8692e130 a2=0 a3=7ffe8692e11c items=0 ppid=1567 pid=1626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:14:57.710000 audit[1628]: 
NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.710000 audit[1628]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5a4d25b0 a2=0 a3=7fff5a4d259c items=0 ppid=1567 pid=1628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:14:57.715000 audit[1629]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.715000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3d30e110 a2=0 a3=7ffe3d30e0fc items=0 ppid=1567 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:14:57.721000 audit[1631]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.721000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcf1187b20 a2=0 a3=7ffcf1187b0c items=0 ppid=1567 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:14:57.730000 audit[1634]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.730000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff7fd32370 a2=0 a3=7fff7fd3235c items=0 ppid=1567 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 20:14:57.734000 audit[1635]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.734000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc32acc060 a2=0 a3=7ffc32acc04c items=0 ppid=1567 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:14:57.739000 audit[1637]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1637 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.739000 audit[1637]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdec711a40 a2=0 a3=7ffdec711a2c items=0 ppid=1567 pid=1637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:14:57.742000 audit[1638]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.742000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec5185640 a2=0 a3=7ffec518562c items=0 ppid=1567 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:14:57.748000 audit[1640]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.748000 audit[1640]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd9e946d0 a2=0 a3=7fffd9e946bc items=0 ppid=1567 pid=1640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:14:57.756000 audit[1643]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1643 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.756000 audit[1643]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff09caa950 a2=0 a3=7fff09caa93c items=0 ppid=1567 pid=1643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:14:57.766000 audit[1646]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.766000 audit[1646]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc1a5bc30 a2=0 a3=7fffc1a5bc1c items=0 ppid=1567 pid=1646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:14:57.768000 audit[1647]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.768000 audit[1647]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcdc2d02b0 a2=0 a3=7ffcdc2d029c items=0 ppid=1567 pid=1647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:14:57.773000 audit[1649]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1649 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.773000 audit[1649]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd3053a2e0 a2=0 a3=7ffd3053a2cc items=0 ppid=1567 pid=1649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:14:57.782000 audit[1652]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:14:57.782000 audit[1652]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffebb98fca0 a2=0 a3=7ffebb98fc8c items=0 ppid=1567 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.782000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:14:57.812000 audit[1656]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:14:57.812000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff78e0bcb0 a2=0 a3=7fff78e0bc9c items=0 ppid=1567 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:14:57.830000 audit[1656]: 
NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1656 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:14:57.830000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff78e0bcb0 a2=0 a3=7fff78e0bc9c items=0 ppid=1567 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:14:57.836000 audit[1660]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1660 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.836000 audit[1660]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcfc7ea430 a2=0 a3=7ffcfc7ea41c items=0 ppid=1567 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:14:57.840000 audit[1662]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.840000 audit[1662]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeda020a40 a2=0 a3=7ffeda020a2c items=0 ppid=1567 pid=1662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.840000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 20:14:57.847000 audit[1665]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.847000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff012a7e60 a2=0 a3=7fff012a7e4c items=0 ppid=1567 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 20:14:57.848000 audit[1666]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.848000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd42d37bf0 a2=0 a3=7ffd42d37bdc items=0 ppid=1567 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.848000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:14:57.851000 audit[1668]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.851000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd96745490 a2=0 a3=7ffd9674547c items=0 ppid=1567 pid=1668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:14:57.853000 audit[1669]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1669 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.853000 audit[1669]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff46595250 a2=0 a3=7fff4659523c items=0 ppid=1567 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.853000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:14:57.856000 audit[1671]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1671 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.856000 audit[1671]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeb66f0130 a2=0 a3=7ffeb66f011c items=0 ppid=1567 pid=1671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.856000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 20:14:57.863000 audit[1674]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.863000 audit[1674]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe0443fed0 a2=0 a3=7ffe0443febc items=0 ppid=1567 pid=1674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:14:57.864000 audit[1675]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1675 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.864000 audit[1675]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1b695390 a2=0 a3=7ffd1b69537c 
items=0 ppid=1567 pid=1675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.864000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:14:57.867000 audit[1677]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1677 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.867000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe99eefd90 a2=0 a3=7ffe99eefd7c items=0 ppid=1567 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:14:57.868000 audit[1678]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1678 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.868000 audit[1678]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff9feec390 a2=0 a3=7fff9feec37c items=0 ppid=1567 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:14:57.871000 audit[1680]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1680 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.871000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9de31460 a2=0 a3=7ffe9de3144c items=0 ppid=1567 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:14:57.878000 audit[1683]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1683 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.878000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf6d1cd20 a2=0 a3=7ffcf6d1cd0c items=0 ppid=1567 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:14:57.883000 audit[1686]: NETFILTER_CFG 
table=filter:73 family=10 entries=1 op=nft_register_rule pid=1686 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.883000 audit[1686]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf6075260 a2=0 a3=7ffcf607524c items=0 ppid=1567 pid=1686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 20:14:57.884000 audit[1687]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1687 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.884000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffda1209390 a2=0 a3=7ffda120937c items=0 ppid=1567 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.884000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:14:57.887000 audit[1689]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1689 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.887000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff22d41330 a2=0 a3=7fff22d4131c items=0 ppid=1567 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:14:57.893000 audit[1692]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1692 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:14:57.893000 audit[1692]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff27770ed0 a2=0 a3=7fff27770ebc items=0 ppid=1567 pid=1692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:14:57.901000 audit[1696]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:14:57.901000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe41406900 a2=0 a3=7ffe414068ec items=0 ppid=1567 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
20:14:57.901000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:14:57.901000 audit[1696]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1696 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:14:57.901000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffe41406900 a2=0 a3=7ffe414068ec items=0 ppid=1567 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:14:57.901000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:14:57.956001 kubelet[1379]: E1002 20:14:57.955853 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:58.076984 kubelet[1379]: E1002 20:14:58.076926 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:14:58.811716 update_engine[1043]: I1002 20:14:58.811403 1043 update_attempter.cc:505] Updating boot flags... Oct 2 20:14:58.956144 kubelet[1379]: E1002 20:14:58.956059 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:14:59.965866 kubelet[1379]: E1002 20:14:59.965781 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:00.966456 kubelet[1379]: E1002 20:15:00.966381 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:01.966644 kubelet[1379]: E1002 20:15:01.966504 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:02.966842 kubelet[1379]: E1002 20:15:02.966799 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:03.077307 kubelet[1379]: E1002 20:15:03.077254 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:03.967243 kubelet[1379]: E1002 20:15:03.967131 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:04.539744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117982590.mount: Deactivated successfully. 
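
The audit records above show what appears to be kube-proxy (the children of ppid 1567) registering its KUBE-* chains through ip6tables and ip6tables-restore; the PROCTITLE field in each record is the command line, hex-encoded with NUL-separated arguments. A minimal Python sketch for turning such a field back into readable text (the helper name is illustrative, not anything referenced in this log):

# Decode an audit PROCTITLE field (hex-encoded argv with NUL separators)
# into a readable command line.
def decode_proctitle(hex_value: str) -> str:
    argv = bytes.fromhex(hex_value).split(b"\x00")
    return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

# Example: the final ip6tables-restore call recorded above.
print(decode_proctitle(
    "6970367461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> ip6tables-restore -w 5 -W 100000 --noflush --counters
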
Oct 2 20:15:04.967674 kubelet[1379]: E1002 20:15:04.967611 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:05.968818 kubelet[1379]: E1002 20:15:05.968693 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:06.970029 kubelet[1379]: E1002 20:15:06.969888 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:07.970655 kubelet[1379]: E1002 20:15:07.970586 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:08.078879 kubelet[1379]: E1002 20:15:08.078819 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:08.799506 env[1055]: time="2023-10-02T20:15:08.799411231Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:15:08.801820 env[1055]: time="2023-10-02T20:15:08.801759657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:15:08.804679 env[1055]: time="2023-10-02T20:15:08.804641756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:15:08.806506 env[1055]: time="2023-10-02T20:15:08.806427905Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 20:15:08.809505 env[1055]: time="2023-10-02T20:15:08.809438475Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:15:08.828990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563582091.mount: Deactivated successfully. Oct 2 20:15:08.830891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2127203259.mount: Deactivated successfully. Oct 2 20:15:08.839855 env[1055]: time="2023-10-02T20:15:08.839815292Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\"" Oct 2 20:15:08.840833 env[1055]: time="2023-10-02T20:15:08.840811746Z" level=info msg="StartContainer for \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\"" Oct 2 20:15:08.878407 systemd[1]: Started cri-containerd-a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d.scope. Oct 2 20:15:08.906018 systemd[1]: cri-containerd-a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d.scope: Deactivated successfully. 
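
The "var-lib-containerd-tmpmounts-containerd\x2dmount1563582091.mount: Deactivated successfully" entries above are systemd tearing down containerd's transient image mounts. The unit names use systemd's path escaping, in which "-" stands for "/" and a literal dash is written "\x2d"; the systemd-escape tool can undo this, and a purely illustrative Python equivalent looks like:

import re

# Recover the mount path from a transient systemd mount unit name such as
# "var-lib-containerd-tmpmounts-containerd\x2dmount1563582091.mount".
# In unit names "-" encodes "/" and "\xNN" encodes a literal byte.
def unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount").replace("-", "/")
    return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",
                        lambda m: chr(int(m.group(1), 16)), name)

print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount1563582091.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1563582091
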
Oct 2 20:15:08.971065 kubelet[1379]: E1002 20:15:08.970981 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:09.530422 env[1055]: time="2023-10-02T20:15:09.530273087Z" level=info msg="shim disconnected" id=a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d Oct 2 20:15:09.530422 env[1055]: time="2023-10-02T20:15:09.530406879Z" level=warning msg="cleaning up after shim disconnected" id=a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d namespace=k8s.io Oct 2 20:15:09.530422 env[1055]: time="2023-10-02T20:15:09.530432437Z" level=info msg="cleaning up dead shim" Oct 2 20:15:09.549705 env[1055]: time="2023-10-02T20:15:09.549554248Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:15:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1737 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:15:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:15:09.550486 env[1055]: time="2023-10-02T20:15:09.550206674Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:15:09.554773 env[1055]: time="2023-10-02T20:15:09.554664204Z" level=error msg="Failed to pipe stdout of container \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\"" error="reading from a closed fifo" Oct 2 20:15:09.554967 env[1055]: time="2023-10-02T20:15:09.554695223Z" level=error msg="Failed to pipe stderr of container \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\"" error="reading from a closed fifo" Oct 2 20:15:09.563734 env[1055]: time="2023-10-02T20:15:09.563531386Z" level=error msg="StartContainer for \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:15:09.565061 kubelet[1379]: E1002 20:15:09.564237 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d" Oct 2 20:15:09.565061 kubelet[1379]: E1002 20:15:09.564477 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:15:09.565061 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:15:09.565061 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:15:09.565499 kubelet[1379]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzlxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:15:09.565764 kubelet[1379]: E1002 20:15:09.564622 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:09.826178 systemd[1]: run-containerd-runc-k8s.io-a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d-runc.QASJqW.mount: Deactivated successfully. Oct 2 20:15:09.826412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d-rootfs.mount: Deactivated successfully. Oct 2 20:15:09.971917 kubelet[1379]: E1002 20:15:09.971792 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:10.359339 env[1055]: time="2023-10-02T20:15:10.359242842Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:15:10.396726 env[1055]: time="2023-10-02T20:15:10.396550225Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\"" Oct 2 20:15:10.398630 env[1055]: time="2023-10-02T20:15:10.397712591Z" level=info msg="StartContainer for \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\"" Oct 2 20:15:10.452343 systemd[1]: Started cri-containerd-e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b.scope. 
Oct 2 20:15:10.470615 systemd[1]: cri-containerd-e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b.scope: Deactivated successfully. Oct 2 20:15:10.483772 env[1055]: time="2023-10-02T20:15:10.483677105Z" level=info msg="shim disconnected" id=e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b Oct 2 20:15:10.483772 env[1055]: time="2023-10-02T20:15:10.483774238Z" level=warning msg="cleaning up after shim disconnected" id=e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b namespace=k8s.io Oct 2 20:15:10.483971 env[1055]: time="2023-10-02T20:15:10.483788766Z" level=info msg="cleaning up dead shim" Oct 2 20:15:10.493900 env[1055]: time="2023-10-02T20:15:10.493807369Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:15:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1777 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:15:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:15:10.494205 env[1055]: time="2023-10-02T20:15:10.494112533Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 20:15:10.494476 env[1055]: time="2023-10-02T20:15:10.494400835Z" level=error msg="Failed to pipe stderr of container \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\"" error="reading from a closed fifo" Oct 2 20:15:10.494973 env[1055]: time="2023-10-02T20:15:10.494868434Z" level=error msg="Failed to pipe stdout of container \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\"" error="reading from a closed fifo" Oct 2 20:15:10.500497 env[1055]: time="2023-10-02T20:15:10.498758537Z" level=error msg="StartContainer for \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:15:10.500797 kubelet[1379]: E1002 20:15:10.499196 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b" Oct 2 20:15:10.500797 kubelet[1379]: E1002 20:15:10.500339 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:15:10.500797 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:15:10.500797 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:15:10.501222 kubelet[1379]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzlxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:15:10.501446 kubelet[1379]: E1002 20:15:10.500437 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:10.823502 systemd[1]: run-containerd-runc-k8s.io-e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b-runc.xLFRZx.mount: Deactivated successfully. Oct 2 20:15:10.823848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b-rootfs.mount: Deactivated successfully. 
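
Both attempts so far die at the same point: runc's container init gets "invalid argument" writing to /proc/self/attr/keycreate, meaning it cannot apply the SELinux process label implied by the pod's SecurityContext (SELinuxOptions Type:spc_t in the container spec dumped above) to the keyring it is about to create. That is usually a mismatch between the label runc is asked to use and what the host's loaded SELinux policy accepts, though this excerpt alone cannot confirm the root cause. A read-only, host-side diagnostic sketch, assuming nothing beyond the standard procfs and selinuxfs paths:

from pathlib import Path

# Check whether SELinux is active on the host and whether the keycreate
# attribute that runc tries to write exists for this process.
# Diagnostic sketch only; it changes nothing and is not part of the
# tooling that produced this log.
def selinux_keycreate_status() -> dict:
    enforce = Path("/sys/fs/selinux/enforce")
    return {
        "selinuxfs_present": Path("/sys/fs/selinux").is_dir(),
        "enforcing": enforce.read_text().strip() == "1" if enforce.exists() else None,
        "keycreate_attr_present": Path("/proc/self/attr/keycreate").exists(),
    }

if __name__ == "__main__":
    print(selinux_keycreate_status())
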
Oct 2 20:15:10.972363 kubelet[1379]: E1002 20:15:10.972208 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:11.361724 kubelet[1379]: I1002 20:15:11.361677 1379 scope.go:115] "RemoveContainer" containerID="a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d" Oct 2 20:15:11.362922 kubelet[1379]: I1002 20:15:11.362887 1379 scope.go:115] "RemoveContainer" containerID="a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d" Oct 2 20:15:11.366091 env[1055]: time="2023-10-02T20:15:11.366029606Z" level=info msg="RemoveContainer for \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\"" Oct 2 20:15:11.367233 env[1055]: time="2023-10-02T20:15:11.367153599Z" level=info msg="RemoveContainer for \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\"" Oct 2 20:15:11.367508 env[1055]: time="2023-10-02T20:15:11.367424969Z" level=error msg="RemoveContainer for \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\" failed" error="failed to set removing state for container \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\": container is already in removing state" Oct 2 20:15:11.367864 kubelet[1379]: E1002 20:15:11.367834 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\": container is already in removing state" containerID="a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d" Oct 2 20:15:11.368145 kubelet[1379]: E1002 20:15:11.368117 1379 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d": container is already in removing state; Skipping pod "cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)" Oct 2 20:15:11.369154 kubelet[1379]: E1002 20:15:11.369120 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:11.375352 env[1055]: time="2023-10-02T20:15:11.375246912Z" level=info msg="RemoveContainer for \"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d\" returns successfully" Oct 2 20:15:11.973059 kubelet[1379]: E1002 20:15:11.972998 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:12.371914 kubelet[1379]: E1002 20:15:12.371841 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:12.638808 kubelet[1379]: W1002 20:15:12.638499 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice/cri-containerd-a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d.scope WatchSource:0}: container 
"a6a169d3bbbf07e982cde1cb4ef0d93e64c94e0fbf7e5825e839e35b8eaa963d" in namespace "k8s.io": not found Oct 2 20:15:12.937662 kubelet[1379]: E1002 20:15:12.937351 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:12.974241 kubelet[1379]: E1002 20:15:12.974119 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:13.080330 kubelet[1379]: E1002 20:15:13.080287 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:13.974546 kubelet[1379]: E1002 20:15:13.974360 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:14.975434 kubelet[1379]: E1002 20:15:14.975311 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:15.750685 kubelet[1379]: W1002 20:15:15.750543 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice/cri-containerd-e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b.scope WatchSource:0}: task e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b not found: not found Oct 2 20:15:15.976060 kubelet[1379]: E1002 20:15:15.975979 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:16.977134 kubelet[1379]: E1002 20:15:16.976979 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:17.977373 kubelet[1379]: E1002 20:15:17.977289 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:18.082719 kubelet[1379]: E1002 20:15:18.082673 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:18.978308 kubelet[1379]: E1002 20:15:18.978185 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:19.979541 kubelet[1379]: E1002 20:15:19.979367 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:20.980306 kubelet[1379]: E1002 20:15:20.980226 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:21.981079 kubelet[1379]: E1002 20:15:21.980999 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:22.981892 kubelet[1379]: E1002 20:15:22.981816 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:23.084306 kubelet[1379]: E1002 20:15:23.084191 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:23.982221 kubelet[1379]: E1002 20:15:23.982065 1379 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:24.983469 kubelet[1379]: E1002 20:15:24.983330 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:25.248926 env[1055]: time="2023-10-02T20:15:25.248258667Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:15:25.269709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858624252.mount: Deactivated successfully. Oct 2 20:15:25.289271 env[1055]: time="2023-10-02T20:15:25.289135281Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\"" Oct 2 20:15:25.290978 env[1055]: time="2023-10-02T20:15:25.290102699Z" level=info msg="StartContainer for \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\"" Oct 2 20:15:25.340289 systemd[1]: Started cri-containerd-1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74.scope. Oct 2 20:15:25.357937 systemd[1]: cri-containerd-1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74.scope: Deactivated successfully. Oct 2 20:15:25.373354 env[1055]: time="2023-10-02T20:15:25.373288250Z" level=info msg="shim disconnected" id=1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74 Oct 2 20:15:25.373503 env[1055]: time="2023-10-02T20:15:25.373362940Z" level=warning msg="cleaning up after shim disconnected" id=1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74 namespace=k8s.io Oct 2 20:15:25.373503 env[1055]: time="2023-10-02T20:15:25.373376185Z" level=info msg="cleaning up dead shim" Oct 2 20:15:25.381777 env[1055]: time="2023-10-02T20:15:25.381724545Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:15:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1816 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:15:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:15:25.382107 env[1055]: time="2023-10-02T20:15:25.382036842Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:15:25.386671 env[1055]: time="2023-10-02T20:15:25.386620761Z" level=error msg="Failed to pipe stdout of container \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\"" error="reading from a closed fifo" Oct 2 20:15:25.386822 env[1055]: time="2023-10-02T20:15:25.386769882Z" level=error msg="Failed to pipe stderr of container \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\"" error="reading from a closed fifo" Oct 2 20:15:25.390730 env[1055]: time="2023-10-02T20:15:25.390678843Z" level=error msg="StartContainer for \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:15:25.391442 kubelet[1379]: E1002 20:15:25.390970 1379 remote_runtime.go:474] "StartContainer from runtime 
service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74" Oct 2 20:15:25.391442 kubelet[1379]: E1002 20:15:25.391372 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:15:25.391442 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:15:25.391442 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:15:25.391660 kubelet[1379]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzlxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:15:25.391728 kubelet[1379]: E1002 20:15:25.391412 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:25.405194 kubelet[1379]: I1002 20:15:25.404766 1379 scope.go:115] "RemoveContainer" containerID="e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b" Oct 2 20:15:25.405194 kubelet[1379]: I1002 20:15:25.405057 1379 scope.go:115] "RemoveContainer" containerID="e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b" Oct 2 20:15:25.407357 env[1055]: time="2023-10-02T20:15:25.407308237Z" level=info msg="RemoveContainer for 
\"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\"" Oct 2 20:15:25.407625 env[1055]: time="2023-10-02T20:15:25.407592390Z" level=info msg="RemoveContainer for \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\"" Oct 2 20:15:25.407822 env[1055]: time="2023-10-02T20:15:25.407788349Z" level=error msg="RemoveContainer for \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\" failed" error="failed to set removing state for container \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\": container is already in removing state" Oct 2 20:15:25.407990 kubelet[1379]: E1002 20:15:25.407980 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\": container is already in removing state" containerID="e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b" Oct 2 20:15:25.408091 kubelet[1379]: E1002 20:15:25.408081 1379 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b": container is already in removing state; Skipping pod "cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)" Oct 2 20:15:25.408426 kubelet[1379]: E1002 20:15:25.408415 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:25.411450 env[1055]: time="2023-10-02T20:15:25.411368613Z" level=info msg="RemoveContainer for \"e93927a6de10c7cdb4a7faf2b8b0f852369f5cc5a2149b2eff90d5403938ff6b\" returns successfully" Oct 2 20:15:25.983935 kubelet[1379]: E1002 20:15:25.983846 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:26.263240 systemd[1]: run-containerd-runc-k8s.io-1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74-runc.eoGMt3.mount: Deactivated successfully. Oct 2 20:15:26.263480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74-rootfs.mount: Deactivated successfully. 
Oct 2 20:15:26.985266 kubelet[1379]: E1002 20:15:26.985108 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:27.985774 kubelet[1379]: E1002 20:15:27.985708 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:28.085734 kubelet[1379]: E1002 20:15:28.085699 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:28.481736 kubelet[1379]: W1002 20:15:28.481666 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice/cri-containerd-1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74.scope WatchSource:0}: task 1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74 not found: not found Oct 2 20:15:28.987142 kubelet[1379]: E1002 20:15:28.987081 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:29.988649 kubelet[1379]: E1002 20:15:29.988560 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:30.989871 kubelet[1379]: E1002 20:15:30.989714 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:31.990684 kubelet[1379]: E1002 20:15:31.990521 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:32.938345 kubelet[1379]: E1002 20:15:32.938263 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:32.991656 kubelet[1379]: E1002 20:15:32.991565 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:33.087566 kubelet[1379]: E1002 20:15:33.087526 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:33.992615 kubelet[1379]: E1002 20:15:33.992510 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:34.993194 kubelet[1379]: E1002 20:15:34.993130 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:35.993874 kubelet[1379]: E1002 20:15:35.993731 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:36.994657 kubelet[1379]: E1002 20:15:36.994560 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:37.996542 kubelet[1379]: E1002 20:15:37.996465 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:38.089372 kubelet[1379]: E1002 20:15:38.089303 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:38.997669 kubelet[1379]: E1002 
20:15:38.997595 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:39.998465 kubelet[1379]: E1002 20:15:39.998380 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:40.243395 kubelet[1379]: E1002 20:15:40.243337 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:40.999412 kubelet[1379]: E1002 20:15:40.999162 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:41.999814 kubelet[1379]: E1002 20:15:41.999716 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:43.000297 kubelet[1379]: E1002 20:15:43.000230 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:43.091499 kubelet[1379]: E1002 20:15:43.091459 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:44.001616 kubelet[1379]: E1002 20:15:44.001475 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:45.001866 kubelet[1379]: E1002 20:15:45.001798 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:46.003841 kubelet[1379]: E1002 20:15:46.003713 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:47.004832 kubelet[1379]: E1002 20:15:47.004783 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:48.006807 kubelet[1379]: E1002 20:15:48.006672 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:48.093123 kubelet[1379]: E1002 20:15:48.093086 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:49.007764 kubelet[1379]: E1002 20:15:49.007564 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:50.008235 kubelet[1379]: E1002 20:15:50.008099 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:51.009305 kubelet[1379]: E1002 20:15:51.009234 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:51.250501 env[1055]: time="2023-10-02T20:15:51.249411789Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:15:51.272138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094425959.mount: Deactivated 
successfully. Oct 2 20:15:51.283452 env[1055]: time="2023-10-02T20:15:51.283389709Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\"" Oct 2 20:15:51.284055 env[1055]: time="2023-10-02T20:15:51.284002829Z" level=info msg="StartContainer for \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\"" Oct 2 20:15:51.313906 systemd[1]: Started cri-containerd-716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994.scope. Oct 2 20:15:51.334514 systemd[1]: cri-containerd-716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994.scope: Deactivated successfully. Oct 2 20:15:51.354083 env[1055]: time="2023-10-02T20:15:51.353961080Z" level=info msg="shim disconnected" id=716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994 Oct 2 20:15:51.354354 env[1055]: time="2023-10-02T20:15:51.354103873Z" level=warning msg="cleaning up after shim disconnected" id=716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994 namespace=k8s.io Oct 2 20:15:51.354354 env[1055]: time="2023-10-02T20:15:51.354146963Z" level=info msg="cleaning up dead shim" Oct 2 20:15:51.368317 env[1055]: time="2023-10-02T20:15:51.368206257Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:15:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1860 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:15:51Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:15:51.368817 env[1055]: time="2023-10-02T20:15:51.368701720Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:15:51.369087 env[1055]: time="2023-10-02T20:15:51.369023173Z" level=error msg="Failed to pipe stdout of container \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\"" error="reading from a closed fifo" Oct 2 20:15:51.370739 env[1055]: time="2023-10-02T20:15:51.370664688Z" level=error msg="Failed to pipe stderr of container \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\"" error="reading from a closed fifo" Oct 2 20:15:51.375222 env[1055]: time="2023-10-02T20:15:51.375142642Z" level=error msg="StartContainer for \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:15:51.375632 kubelet[1379]: E1002 20:15:51.375538 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994" Oct 2 20:15:51.376526 kubelet[1379]: E1002 20:15:51.375912 1379 kuberuntime_manager.go:862] init container 
&Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:15:51.376526 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:15:51.376526 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:15:51.376526 kubelet[1379]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzlxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:15:51.376862 kubelet[1379]: E1002 20:15:51.375989 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:51.488796 kubelet[1379]: I1002 20:15:51.488754 1379 scope.go:115] "RemoveContainer" containerID="1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74" Oct 2 20:15:51.489762 kubelet[1379]: I1002 20:15:51.489730 1379 scope.go:115] "RemoveContainer" containerID="1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74" Oct 2 20:15:51.492489 env[1055]: time="2023-10-02T20:15:51.492429718Z" level=info msg="RemoveContainer for \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\"" Oct 2 20:15:51.493173 env[1055]: time="2023-10-02T20:15:51.493123868Z" level=info msg="RemoveContainer for \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\"" Oct 2 20:15:51.493383 env[1055]: time="2023-10-02T20:15:51.493316924Z" level=error msg="RemoveContainer for \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\" failed" error="failed to set removing state for container 
\"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\": container is already in removing state" Oct 2 20:15:51.493808 kubelet[1379]: E1002 20:15:51.493779 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\": container is already in removing state" containerID="1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74" Oct 2 20:15:51.494125 kubelet[1379]: E1002 20:15:51.494070 1379 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74": container is already in removing state; Skipping pod "cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)" Oct 2 20:15:51.495188 kubelet[1379]: E1002 20:15:51.495156 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:15:51.497547 env[1055]: time="2023-10-02T20:15:51.497482863Z" level=info msg="RemoveContainer for \"1305d333f728361dd7067e53bfba14c2af728c42cab78254b5edef681f8e1f74\" returns successfully" Oct 2 20:15:52.010986 kubelet[1379]: E1002 20:15:52.010856 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:52.266559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994-rootfs.mount: Deactivated successfully. 
Oct 2 20:15:52.938213 kubelet[1379]: E1002 20:15:52.938096 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:53.011900 kubelet[1379]: E1002 20:15:53.011841 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:53.095242 kubelet[1379]: E1002 20:15:53.095152 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:54.013929 kubelet[1379]: E1002 20:15:54.013835 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:54.460496 kubelet[1379]: W1002 20:15:54.460431 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice/cri-containerd-716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994.scope WatchSource:0}: task 716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994 not found: not found Oct 2 20:15:55.015934 kubelet[1379]: E1002 20:15:55.015816 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:56.016683 kubelet[1379]: E1002 20:15:56.016559 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:57.017366 kubelet[1379]: E1002 20:15:57.017307 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:58.018613 kubelet[1379]: E1002 20:15:58.018493 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:15:58.097359 kubelet[1379]: E1002 20:15:58.097278 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:15:59.019563 kubelet[1379]: E1002 20:15:59.019432 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:00.020664 kubelet[1379]: E1002 20:16:00.020524 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:01.021525 kubelet[1379]: E1002 20:16:01.021467 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:02.023082 kubelet[1379]: E1002 20:16:02.023027 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:03.024314 kubelet[1379]: E1002 20:16:03.024234 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:03.098965 kubelet[1379]: E1002 20:16:03.098887 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:04.024916 kubelet[1379]: E1002 20:16:04.024812 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:05.025824 kubelet[1379]: E1002 
20:16:05.025755 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:05.243304 kubelet[1379]: E1002 20:16:05.243235 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:16:06.027693 kubelet[1379]: E1002 20:16:06.027630 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:07.028961 kubelet[1379]: E1002 20:16:07.028858 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:08.029197 kubelet[1379]: E1002 20:16:08.029091 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:08.100199 kubelet[1379]: E1002 20:16:08.100090 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:09.030154 kubelet[1379]: E1002 20:16:09.030091 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:10.031875 kubelet[1379]: E1002 20:16:10.031809 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:11.033612 kubelet[1379]: E1002 20:16:11.033504 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:12.035206 kubelet[1379]: E1002 20:16:12.035042 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:12.937524 kubelet[1379]: E1002 20:16:12.937399 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:13.035763 kubelet[1379]: E1002 20:16:13.035693 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:13.101771 kubelet[1379]: E1002 20:16:13.101684 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:14.036659 kubelet[1379]: E1002 20:16:14.036548 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:15.037452 kubelet[1379]: E1002 20:16:15.037348 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:16.037747 kubelet[1379]: E1002 20:16:16.037649 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:17.038902 kubelet[1379]: E1002 20:16:17.038843 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:17.243503 kubelet[1379]: E1002 20:16:17.243439 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: 
\"back-off 40s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:16:18.040714 kubelet[1379]: E1002 20:16:18.040440 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:18.103441 kubelet[1379]: E1002 20:16:18.103410 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:19.041916 kubelet[1379]: E1002 20:16:19.041789 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:20.042545 kubelet[1379]: E1002 20:16:20.042488 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:21.044114 kubelet[1379]: E1002 20:16:21.044012 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:22.044931 kubelet[1379]: E1002 20:16:22.044872 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:23.047003 kubelet[1379]: E1002 20:16:23.046898 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:23.104934 kubelet[1379]: E1002 20:16:23.104860 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:24.047925 kubelet[1379]: E1002 20:16:24.047816 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:25.048892 kubelet[1379]: E1002 20:16:25.048790 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:26.049337 kubelet[1379]: E1002 20:16:26.049220 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:27.050281 kubelet[1379]: E1002 20:16:27.050214 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:28.051217 kubelet[1379]: E1002 20:16:28.051153 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:28.106745 kubelet[1379]: E1002 20:16:28.106709 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:29.052604 kubelet[1379]: E1002 20:16:29.052495 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:29.243444 kubelet[1379]: E1002 20:16:29.243393 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:16:30.053876 kubelet[1379]: E1002 20:16:30.053724 
1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:31.054597 kubelet[1379]: E1002 20:16:31.054478 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:32.055808 kubelet[1379]: E1002 20:16:32.055670 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:32.937523 kubelet[1379]: E1002 20:16:32.937464 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:33.056732 kubelet[1379]: E1002 20:16:33.056662 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:33.108729 kubelet[1379]: E1002 20:16:33.108661 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:34.058723 kubelet[1379]: E1002 20:16:34.058660 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:35.060450 kubelet[1379]: E1002 20:16:35.060394 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:36.062245 kubelet[1379]: E1002 20:16:36.062191 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:37.063919 kubelet[1379]: E1002 20:16:37.063860 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:38.065552 kubelet[1379]: E1002 20:16:38.065438 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:38.110179 kubelet[1379]: E1002 20:16:38.110137 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:39.066008 kubelet[1379]: E1002 20:16:39.065882 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:40.067100 kubelet[1379]: E1002 20:16:40.067032 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:41.068272 kubelet[1379]: E1002 20:16:41.068144 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:42.069067 kubelet[1379]: E1002 20:16:42.068918 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:43.069950 kubelet[1379]: E1002 20:16:43.069824 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:43.112140 kubelet[1379]: E1002 20:16:43.112071 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:44.070459 kubelet[1379]: E1002 20:16:44.070335 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:16:44.248129 env[1055]: time="2023-10-02T20:16:44.248039814Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 20:16:44.276713 env[1055]: time="2023-10-02T20:16:44.276552919Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\"" Oct 2 20:16:44.279042 env[1055]: time="2023-10-02T20:16:44.278965224Z" level=info msg="StartContainer for \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\"" Oct 2 20:16:44.334441 systemd[1]: Started cri-containerd-1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69.scope. Oct 2 20:16:44.365273 systemd[1]: cri-containerd-1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69.scope: Deactivated successfully. Oct 2 20:16:44.375797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69-rootfs.mount: Deactivated successfully. Oct 2 20:16:44.384357 env[1055]: time="2023-10-02T20:16:44.384310560Z" level=info msg="shim disconnected" id=1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69 Oct 2 20:16:44.384596 env[1055]: time="2023-10-02T20:16:44.384551892Z" level=warning msg="cleaning up after shim disconnected" id=1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69 namespace=k8s.io Oct 2 20:16:44.384685 env[1055]: time="2023-10-02T20:16:44.384668459Z" level=info msg="cleaning up dead shim" Oct 2 20:16:44.392787 env[1055]: time="2023-10-02T20:16:44.392713345Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:16:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1905 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:16:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:16:44.393053 env[1055]: time="2023-10-02T20:16:44.392986134Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:16:44.393287 env[1055]: time="2023-10-02T20:16:44.393240189Z" level=error msg="Failed to pipe stderr of container \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\"" error="reading from a closed fifo" Oct 2 20:16:44.393445 env[1055]: time="2023-10-02T20:16:44.393415696Z" level=error msg="Failed to pipe stdout of container \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\"" error="reading from a closed fifo" Oct 2 20:16:44.397146 env[1055]: time="2023-10-02T20:16:44.397095579Z" level=error msg="StartContainer for \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:16:44.397390 kubelet[1379]: E1002 20:16:44.397354 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime 
create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69" Oct 2 20:16:44.397476 kubelet[1379]: E1002 20:16:44.397461 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:16:44.397476 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:16:44.397476 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:16:44.397476 kubelet[1379]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzlxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:16:44.397710 kubelet[1379]: E1002 20:16:44.397504 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:16:44.618645 kubelet[1379]: I1002 20:16:44.618437 1379 scope.go:115] "RemoveContainer" containerID="716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994" Oct 2 20:16:44.620984 kubelet[1379]: I1002 20:16:44.620944 1379 scope.go:115] "RemoveContainer" containerID="716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994" Oct 2 20:16:44.623418 env[1055]: time="2023-10-02T20:16:44.623360938Z" level=info msg="RemoveContainer for \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\"" Oct 2 20:16:44.624092 env[1055]: time="2023-10-02T20:16:44.624003649Z" level=info msg="RemoveContainer for 
\"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\"" Oct 2 20:16:44.624286 env[1055]: time="2023-10-02T20:16:44.624208091Z" level=error msg="RemoveContainer for \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\" failed" error="failed to set removing state for container \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\": container is already in removing state" Oct 2 20:16:44.624632 kubelet[1379]: E1002 20:16:44.624527 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\": container is already in removing state" containerID="716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994" Oct 2 20:16:44.624850 kubelet[1379]: E1002 20:16:44.624649 1379 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994": container is already in removing state; Skipping pod "cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)" Oct 2 20:16:44.625350 kubelet[1379]: E1002 20:16:44.625244 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:16:44.632339 env[1055]: time="2023-10-02T20:16:44.632250602Z" level=info msg="RemoveContainer for \"716ccbdbd37bac3367b8857a94f104e8aa8a460e88ef1bc896c5d98a43169994\" returns successfully" Oct 2 20:16:45.071414 kubelet[1379]: E1002 20:16:45.071356 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:46.072655 kubelet[1379]: E1002 20:16:46.072401 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:47.073132 kubelet[1379]: E1002 20:16:47.073073 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:47.490923 kubelet[1379]: W1002 20:16:47.490719 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice/cri-containerd-1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69.scope WatchSource:0}: task 1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69 not found: not found Oct 2 20:16:48.074616 kubelet[1379]: E1002 20:16:48.074513 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:48.114177 kubelet[1379]: E1002 20:16:48.114140 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:49.076039 kubelet[1379]: E1002 20:16:49.075990 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:50.076929 kubelet[1379]: E1002 20:16:50.076873 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:16:51.078692 kubelet[1379]: E1002 20:16:51.078634 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:52.080703 kubelet[1379]: E1002 20:16:52.080523 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:52.938431 kubelet[1379]: E1002 20:16:52.938353 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:53.082067 kubelet[1379]: E1002 20:16:53.082017 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:53.115456 kubelet[1379]: E1002 20:16:53.115427 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:54.083733 kubelet[1379]: E1002 20:16:54.083556 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:55.085441 kubelet[1379]: E1002 20:16:55.085360 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:56.086463 kubelet[1379]: E1002 20:16:56.086367 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:57.086915 kubelet[1379]: E1002 20:16:57.086836 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:57.243355 kubelet[1379]: E1002 20:16:57.243305 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:16:58.088006 kubelet[1379]: E1002 20:16:58.087904 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:16:58.117272 kubelet[1379]: E1002 20:16:58.117199 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:16:59.088797 kubelet[1379]: E1002 20:16:59.088702 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:00.089564 kubelet[1379]: E1002 20:17:00.089510 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:01.090690 kubelet[1379]: E1002 20:17:01.090508 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:02.091802 kubelet[1379]: E1002 20:17:02.091736 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:03.093651 kubelet[1379]: E1002 20:17:03.093551 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:03.119123 kubelet[1379]: E1002 20:17:03.119036 1379 kubelet.go:2373] "Container runtime network not 
ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:04.094972 kubelet[1379]: E1002 20:17:04.094901 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:05.096320 kubelet[1379]: E1002 20:17:05.096221 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:06.097351 kubelet[1379]: E1002 20:17:06.097292 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:07.098982 kubelet[1379]: E1002 20:17:07.098847 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:08.099421 kubelet[1379]: E1002 20:17:08.099367 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:08.121416 kubelet[1379]: E1002 20:17:08.121347 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:09.100947 kubelet[1379]: E1002 20:17:09.100879 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:10.102175 kubelet[1379]: E1002 20:17:10.102038 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:11.103062 kubelet[1379]: E1002 20:17:11.103006 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:12.104369 kubelet[1379]: E1002 20:17:12.104262 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:12.243782 kubelet[1379]: E1002 20:17:12.243700 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:17:12.938260 kubelet[1379]: E1002 20:17:12.938119 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:13.105181 kubelet[1379]: E1002 20:17:13.105037 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:13.122920 kubelet[1379]: E1002 20:17:13.122886 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:14.105537 kubelet[1379]: E1002 20:17:14.105428 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:15.105846 kubelet[1379]: E1002 20:17:15.105718 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:16.107289 kubelet[1379]: E1002 20:17:16.107239 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:17:17.108508 kubelet[1379]: E1002 20:17:17.108477 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:18.109735 kubelet[1379]: E1002 20:17:18.109673 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:18.124785 kubelet[1379]: E1002 20:17:18.124751 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:19.110718 kubelet[1379]: E1002 20:17:19.110625 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:20.111030 kubelet[1379]: E1002 20:17:20.110981 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:21.112838 kubelet[1379]: E1002 20:17:21.112749 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:22.114220 kubelet[1379]: E1002 20:17:22.114144 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:23.115175 kubelet[1379]: E1002 20:17:23.115127 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:23.126501 kubelet[1379]: E1002 20:17:23.126445 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:24.117086 kubelet[1379]: E1002 20:17:24.117017 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:24.244255 kubelet[1379]: E1002 20:17:24.244166 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:17:25.117908 kubelet[1379]: E1002 20:17:25.117851 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:26.119077 kubelet[1379]: E1002 20:17:26.119003 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:27.120794 kubelet[1379]: E1002 20:17:27.120713 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:28.120986 kubelet[1379]: E1002 20:17:28.120930 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:28.128545 kubelet[1379]: E1002 20:17:28.128358 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:29.122470 kubelet[1379]: E1002 20:17:29.122413 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:30.123410 kubelet[1379]: E1002 20:17:30.123318 1379 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:31.124780 kubelet[1379]: E1002 20:17:31.124527 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:32.125314 kubelet[1379]: E1002 20:17:32.125253 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:32.938179 kubelet[1379]: E1002 20:17:32.938121 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:33.126764 kubelet[1379]: E1002 20:17:33.126690 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:33.129799 kubelet[1379]: E1002 20:17:33.129744 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:34.127687 kubelet[1379]: E1002 20:17:34.127630 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:35.129010 kubelet[1379]: E1002 20:17:35.128905 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:36.130136 kubelet[1379]: E1002 20:17:36.130089 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:37.131354 kubelet[1379]: E1002 20:17:37.131279 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:37.243967 kubelet[1379]: E1002 20:17:37.243905 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:17:38.131510 kubelet[1379]: E1002 20:17:38.131402 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:38.133508 kubelet[1379]: E1002 20:17:38.133038 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:39.134401 kubelet[1379]: E1002 20:17:39.134262 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:40.135415 kubelet[1379]: E1002 20:17:40.135354 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:41.136905 kubelet[1379]: E1002 20:17:41.136847 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:42.138137 kubelet[1379]: E1002 20:17:42.138036 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:43.133111 kubelet[1379]: E1002 20:17:43.133056 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Oct 2 20:17:43.138236 kubelet[1379]: E1002 20:17:43.138190 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:44.138617 kubelet[1379]: E1002 20:17:44.138493 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:45.139005 kubelet[1379]: E1002 20:17:45.138929 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:46.139814 kubelet[1379]: E1002 20:17:46.139735 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:47.140008 kubelet[1379]: E1002 20:17:47.139931 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:48.134721 kubelet[1379]: E1002 20:17:48.134657 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:48.140431 kubelet[1379]: E1002 20:17:48.140376 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:49.141423 kubelet[1379]: E1002 20:17:49.141277 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:50.141967 kubelet[1379]: E1002 20:17:50.141836 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:50.244028 kubelet[1379]: E1002 20:17:50.243921 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:17:51.143057 kubelet[1379]: E1002 20:17:51.142997 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:52.144108 kubelet[1379]: E1002 20:17:52.143966 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:52.937426 kubelet[1379]: E1002 20:17:52.937335 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:53.136223 kubelet[1379]: E1002 20:17:53.136052 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:53.144361 kubelet[1379]: E1002 20:17:53.144327 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:54.146166 kubelet[1379]: E1002 20:17:54.146104 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:55.147298 kubelet[1379]: E1002 20:17:55.147232 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:56.148234 kubelet[1379]: E1002 20:17:56.148055 1379 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:57.148419 kubelet[1379]: E1002 20:17:57.148318 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:58.137925 kubelet[1379]: E1002 20:17:58.137877 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:17:58.148953 kubelet[1379]: E1002 20:17:58.148891 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:17:59.149145 kubelet[1379]: E1002 20:17:59.149071 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:00.149898 kubelet[1379]: E1002 20:18:00.149839 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:01.151414 kubelet[1379]: E1002 20:18:01.151269 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:02.152382 kubelet[1379]: E1002 20:18:02.152276 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:02.243691 kubelet[1379]: E1002 20:18:02.243645 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:18:03.139862 kubelet[1379]: E1002 20:18:03.139765 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:03.153327 kubelet[1379]: E1002 20:18:03.153211 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:04.154382 kubelet[1379]: E1002 20:18:04.154327 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:05.155442 kubelet[1379]: E1002 20:18:05.155373 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:06.156600 kubelet[1379]: E1002 20:18:06.156522 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:07.158285 kubelet[1379]: E1002 20:18:07.158196 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:08.141985 kubelet[1379]: E1002 20:18:08.141894 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:08.159334 kubelet[1379]: E1002 20:18:08.159263 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:09.161680 kubelet[1379]: E1002 20:18:09.161513 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:10.162623 
kubelet[1379]: E1002 20:18:10.162487 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:11.163770 kubelet[1379]: E1002 20:18:11.163662 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:12.164000 kubelet[1379]: E1002 20:18:12.163867 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:12.937552 kubelet[1379]: E1002 20:18:12.937403 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:13.143010 kubelet[1379]: E1002 20:18:13.142957 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:13.165141 kubelet[1379]: E1002 20:18:13.165013 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:14.165825 kubelet[1379]: E1002 20:18:14.165701 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:15.166487 kubelet[1379]: E1002 20:18:15.166426 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:16.167414 kubelet[1379]: E1002 20:18:16.167360 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:17.168118 kubelet[1379]: E1002 20:18:17.168058 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:17.249455 env[1055]: time="2023-10-02T20:18:17.249373068Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 20:18:17.271422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758923877.mount: Deactivated successfully. Oct 2 20:18:17.285477 env[1055]: time="2023-10-02T20:18:17.285363539Z" level=info msg="CreateContainer within sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\"" Oct 2 20:18:17.286667 env[1055]: time="2023-10-02T20:18:17.286614933Z" level=info msg="StartContainer for \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\"" Oct 2 20:18:17.338353 systemd[1]: Started cri-containerd-b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2.scope. Oct 2 20:18:17.359264 systemd[1]: cri-containerd-b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2.scope: Deactivated successfully. 
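At this point the kubelet has retried the same mount-cgroup init container repeatedly (Attempt:4 earlier, Attempt:5 here), and the delay in the "Error syncing pod" entries has grown from 40s to 1m20s, reaching 2m40s further below. That progression is consistent with the kubelet's CrashLoopBackOff behaviour of doubling the restart back-off from a 10-second base up to a 5-minute cap. The snippet below is a hedged sketch of that arithmetic only, not kubelet code.

```go
// Hedged sketch of the back-off arithmetic, not kubelet source: starting at 10s and
// doubling per failed restart with a 5-minute cap yields the 40s, 1m20s and 2m40s
// delays logged for pod cilium-s8fg6.
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute

	for restart := 1; restart <= 6; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```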
Oct 2 20:18:17.375457 env[1055]: time="2023-10-02T20:18:17.375401572Z" level=info msg="shim disconnected" id=b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2 Oct 2 20:18:17.375457 env[1055]: time="2023-10-02T20:18:17.375452748Z" level=warning msg="cleaning up after shim disconnected" id=b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2 namespace=k8s.io Oct 2 20:18:17.375457 env[1055]: time="2023-10-02T20:18:17.375463008Z" level=info msg="cleaning up dead shim" Oct 2 20:18:17.384103 env[1055]: time="2023-10-02T20:18:17.384043148Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:18:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1950 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:18:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:18:17.384346 env[1055]: time="2023-10-02T20:18:17.384290402Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:18:17.384539 env[1055]: time="2023-10-02T20:18:17.384504743Z" level=error msg="Failed to pipe stdout of container \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\"" error="reading from a closed fifo" Oct 2 20:18:17.385699 env[1055]: time="2023-10-02T20:18:17.385646912Z" level=error msg="Failed to pipe stderr of container \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\"" error="reading from a closed fifo" Oct 2 20:18:17.389049 env[1055]: time="2023-10-02T20:18:17.389005092Z" level=error msg="StartContainer for \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:18:17.389318 kubelet[1379]: E1002 20:18:17.389284 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2" Oct 2 20:18:17.389465 kubelet[1379]: E1002 20:18:17.389448 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:18:17.389465 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:18:17.389465 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:18:17.389465 kubelet[1379]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jzlxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:18:17.389654 kubelet[1379]: E1002 20:18:17.389496 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:18:17.848492 kubelet[1379]: I1002 20:18:17.847811 1379 scope.go:115] "RemoveContainer" containerID="1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69" Oct 2 20:18:17.848492 kubelet[1379]: I1002 20:18:17.848417 1379 scope.go:115] "RemoveContainer" containerID="1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69" Oct 2 20:18:17.851381 env[1055]: time="2023-10-02T20:18:17.851293911Z" level=info msg="RemoveContainer for \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\"" Oct 2 20:18:17.853965 env[1055]: time="2023-10-02T20:18:17.853775939Z" level=info msg="RemoveContainer for \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\"" Oct 2 20:18:17.855005 env[1055]: time="2023-10-02T20:18:17.854883143Z" level=error msg="RemoveContainer for \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\" failed" error="failed to set removing state for container \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\": container is already in removing state" Oct 2 20:18:17.856994 kubelet[1379]: E1002 20:18:17.855847 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\": container is already in removing state" 
containerID="1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69" Oct 2 20:18:17.856994 kubelet[1379]: E1002 20:18:17.855920 1379 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69": container is already in removing state; Skipping pod "cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)" Oct 2 20:18:17.856994 kubelet[1379]: E1002 20:18:17.856515 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:18:17.862923 env[1055]: time="2023-10-02T20:18:17.862819738Z" level=info msg="RemoveContainer for \"1017253d5c604dc98932f55d213a537fa2f28cd05de6405bf04d7a8512e0fe69\" returns successfully" Oct 2 20:18:18.144888 kubelet[1379]: E1002 20:18:18.144732 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:18.169498 kubelet[1379]: E1002 20:18:18.169396 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:18.264268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2-rootfs.mount: Deactivated successfully. Oct 2 20:18:19.170079 kubelet[1379]: E1002 20:18:19.170006 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:20.170612 kubelet[1379]: E1002 20:18:20.170510 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:20.484865 kubelet[1379]: W1002 20:18:20.484237 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice/cri-containerd-b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2.scope WatchSource:0}: task b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2 not found: not found Oct 2 20:18:21.171540 kubelet[1379]: E1002 20:18:21.171472 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:22.172074 kubelet[1379]: E1002 20:18:22.172008 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:23.146840 kubelet[1379]: E1002 20:18:23.146758 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:23.173374 kubelet[1379]: E1002 20:18:23.173257 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:24.174176 kubelet[1379]: E1002 20:18:24.173939 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:25.174246 kubelet[1379]: E1002 20:18:25.174138 1379 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:26.175089 kubelet[1379]: E1002 20:18:26.174957 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:27.175740 kubelet[1379]: E1002 20:18:27.175670 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:28.147938 kubelet[1379]: E1002 20:18:28.147835 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:28.176712 kubelet[1379]: E1002 20:18:28.176626 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:29.177455 kubelet[1379]: E1002 20:18:29.177388 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:29.244167 kubelet[1379]: E1002 20:18:29.244073 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-s8fg6_kube-system(4f4edfc2-67cc-4cfe-9338-b99187e9c818)\"" pod="kube-system/cilium-s8fg6" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 Oct 2 20:18:30.179229 kubelet[1379]: E1002 20:18:30.179171 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:31.180728 kubelet[1379]: E1002 20:18:31.180643 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:32.181320 kubelet[1379]: E1002 20:18:32.181265 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:32.937396 kubelet[1379]: E1002 20:18:32.937357 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:33.149677 kubelet[1379]: E1002 20:18:33.149556 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:33.182684 kubelet[1379]: E1002 20:18:33.182636 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:34.183138 kubelet[1379]: E1002 20:18:34.183087 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:35.184865 kubelet[1379]: E1002 20:18:35.184807 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:36.045382 env[1055]: time="2023-10-02T20:18:36.045284953Z" level=info msg="StopPodSandbox for \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\"" Oct 2 20:18:36.049172 env[1055]: time="2023-10-02T20:18:36.045413064Z" level=info msg="Container to stop \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:18:36.048480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25-shm.mount: Deactivated 
successfully. Oct 2 20:18:36.062374 systemd[1]: cri-containerd-77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25.scope: Deactivated successfully. Oct 2 20:18:36.069309 kernel: kauditd_printk_skb: 283 callbacks suppressed Oct 2 20:18:36.069653 kernel: audit: type=1334 audit(1696277916.062:666): prog-id=71 op=UNLOAD Oct 2 20:18:36.062000 audit: BPF prog-id=71 op=UNLOAD Oct 2 20:18:36.070000 audit: BPF prog-id=74 op=UNLOAD Oct 2 20:18:36.074713 kernel: audit: type=1334 audit(1696277916.070:667): prog-id=74 op=UNLOAD Oct 2 20:18:36.117086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25-rootfs.mount: Deactivated successfully. Oct 2 20:18:36.126436 env[1055]: time="2023-10-02T20:18:36.126317340Z" level=info msg="shim disconnected" id=77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25 Oct 2 20:18:36.128339 env[1055]: time="2023-10-02T20:18:36.128281709Z" level=warning msg="cleaning up after shim disconnected" id=77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25 namespace=k8s.io Oct 2 20:18:36.128557 env[1055]: time="2023-10-02T20:18:36.128519585Z" level=info msg="cleaning up dead shim" Oct 2 20:18:36.147222 env[1055]: time="2023-10-02T20:18:36.147118263Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:18:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1988 runtime=io.containerd.runc.v2\n" Oct 2 20:18:36.147901 env[1055]: time="2023-10-02T20:18:36.147797345Z" level=info msg="TearDown network for sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" successfully" Oct 2 20:18:36.147901 env[1055]: time="2023-10-02T20:18:36.147888025Z" level=info msg="StopPodSandbox for \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" returns successfully" Oct 2 20:18:36.186735 kubelet[1379]: E1002 20:18:36.186651 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:36.235663 kubelet[1379]: I1002 20:18:36.235186 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.235663 kubelet[1379]: I1002 20:18:36.235301 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-net\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.235663 kubelet[1379]: I1002 20:18:36.235419 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.235663 kubelet[1379]: I1002 20:18:36.235534 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-cgroup\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.236203 kubelet[1379]: I1002 20:18:36.235688 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-run\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.236203 kubelet[1379]: I1002 20:18:36.235765 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.236203 kubelet[1379]: I1002 20:18:36.235995 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.236203 kubelet[1379]: I1002 20:18:36.236104 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-xtables-lock\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.236630 kubelet[1379]: W1002 20:18:36.236506 1379 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/4f4edfc2-67cc-4cfe-9338-b99187e9c818/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:18:36.238207 kubelet[1379]: I1002 20:18:36.236828 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-config-path\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238207 kubelet[1379]: I1002 20:18:36.236924 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cni-path\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238207 kubelet[1379]: I1002 20:18:36.236979 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-etc-cni-netd\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238207 kubelet[1379]: I1002 20:18:36.237029 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-bpf-maps\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 
20:18:36.238207 kubelet[1379]: I1002 20:18:36.237094 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f4edfc2-67cc-4cfe-9338-b99187e9c818-clustermesh-secrets\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238207 kubelet[1379]: I1002 20:18:36.237148 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-lib-modules\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238702 kubelet[1379]: I1002 20:18:36.237202 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hubble-tls\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238702 kubelet[1379]: I1002 20:18:36.237259 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzlxn\" (UniqueName: \"kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-kube-api-access-jzlxn\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238702 kubelet[1379]: I1002 20:18:36.237312 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hostproc\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238702 kubelet[1379]: I1002 20:18:36.237399 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-kernel\") pod \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\" (UID: \"4f4edfc2-67cc-4cfe-9338-b99187e9c818\") " Oct 2 20:18:36.238702 kubelet[1379]: I1002 20:18:36.237454 1379 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-xtables-lock\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.238702 kubelet[1379]: I1002 20:18:36.237484 1379 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-net\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.238702 kubelet[1379]: I1002 20:18:36.237512 1379 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-cgroup\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.239146 kubelet[1379]: I1002 20:18:36.237539 1379 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-run\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.239146 kubelet[1379]: I1002 20:18:36.237634 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.239146 kubelet[1379]: I1002 20:18:36.237694 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.239146 kubelet[1379]: I1002 20:18:36.237738 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.239146 kubelet[1379]: I1002 20:18:36.237778 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.241484 kubelet[1379]: I1002 20:18:36.241416 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:18:36.242207 kubelet[1379]: I1002 20:18:36.242161 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.242677 kubelet[1379]: I1002 20:18:36.242410 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:18:36.248245 systemd[1]: var-lib-kubelet-pods-4f4edfc2\x2d67cc\x2d4cfe\x2d9338\x2db99187e9c818-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:18:36.250406 kubelet[1379]: I1002 20:18:36.250356 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4edfc2-67cc-4cfe-9338-b99187e9c818-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:18:36.254762 systemd[1]: var-lib-kubelet-pods-4f4edfc2\x2d67cc\x2d4cfe\x2d9338\x2db99187e9c818-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 20:18:36.256909 kubelet[1379]: I1002 20:18:36.256840 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:18:36.260540 systemd[1]: var-lib-kubelet-pods-4f4edfc2\x2d67cc\x2d4cfe\x2d9338\x2db99187e9c818-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djzlxn.mount: Deactivated successfully. Oct 2 20:18:36.262462 kubelet[1379]: I1002 20:18:36.262410 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-kube-api-access-jzlxn" (OuterVolumeSpecName: "kube-api-access-jzlxn") pod "4f4edfc2-67cc-4cfe-9338-b99187e9c818" (UID: "4f4edfc2-67cc-4cfe-9338-b99187e9c818"). InnerVolumeSpecName "kube-api-access-jzlxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:18:36.338058 kubelet[1379]: I1002 20:18:36.338010 1379 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cni-path\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.338383 kubelet[1379]: I1002 20:18:36.338355 1379 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-etc-cni-netd\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.338559 kubelet[1379]: I1002 20:18:36.338537 1379 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-bpf-maps\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.338804 kubelet[1379]: I1002 20:18:36.338781 1379 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f4edfc2-67cc-4cfe-9338-b99187e9c818-clustermesh-secrets\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.338971 kubelet[1379]: I1002 20:18:36.338950 1379 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-lib-modules\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.339144 kubelet[1379]: I1002 20:18:36.339124 1379 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hubble-tls\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.339311 kubelet[1379]: I1002 20:18:36.339290 1379 reconciler.go:399] "Volume detached for volume \"kube-api-access-jzlxn\" (UniqueName: \"kubernetes.io/projected/4f4edfc2-67cc-4cfe-9338-b99187e9c818-kube-api-access-jzlxn\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.339474 kubelet[1379]: I1002 20:18:36.339453 1379 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-hostproc\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.339677 kubelet[1379]: I1002 20:18:36.339654 1379 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f4edfc2-67cc-4cfe-9338-b99187e9c818-host-proc-sys-kernel\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.339892 kubelet[1379]: I1002 20:18:36.339867 1379 reconciler.go:399] "Volume 
detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4edfc2-67cc-4cfe-9338-b99187e9c818-cilium-config-path\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:18:36.900877 kubelet[1379]: I1002 20:18:36.900838 1379 scope.go:115] "RemoveContainer" containerID="b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2" Oct 2 20:18:36.903872 env[1055]: time="2023-10-02T20:18:36.903769748Z" level=info msg="RemoveContainer for \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\"" Oct 2 20:18:36.908934 systemd[1]: Removed slice kubepods-burstable-pod4f4edfc2_67cc_4cfe_9338_b99187e9c818.slice. Oct 2 20:18:36.911978 env[1055]: time="2023-10-02T20:18:36.911911729Z" level=info msg="RemoveContainer for \"b242326e2e1d6d0d1eaf508e4d6011ac5f56a8f2a2ec7e7c67eacc8a3450e5a2\" returns successfully" Oct 2 20:18:37.187054 kubelet[1379]: E1002 20:18:37.186890 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:37.249407 kubelet[1379]: I1002 20:18:37.249274 1379 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4f4edfc2-67cc-4cfe-9338-b99187e9c818 path="/var/lib/kubelet/pods/4f4edfc2-67cc-4cfe-9338-b99187e9c818/volumes" Oct 2 20:18:38.151522 kubelet[1379]: E1002 20:18:38.151410 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:38.188757 kubelet[1379]: E1002 20:18:38.188637 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:39.189413 kubelet[1379]: E1002 20:18:39.189212 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:40.190215 kubelet[1379]: E1002 20:18:40.190162 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:41.191609 kubelet[1379]: E1002 20:18:41.191523 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:42.169781 kubelet[1379]: I1002 20:18:42.169731 1379 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:18:42.170225 kubelet[1379]: E1002 20:18:42.170198 1379 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.170458 kubelet[1379]: E1002 20:18:42.170432 1379 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.170697 kubelet[1379]: E1002 20:18:42.170673 1379 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.170897 kubelet[1379]: E1002 20:18:42.170872 1379 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.171109 kubelet[1379]: E1002 20:18:42.171085 1379 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.171382 kubelet[1379]: I1002 20:18:42.171320 1379 memory_manager.go:345] "RemoveStaleState removing state" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 
20:18:42.171610 kubelet[1379]: I1002 20:18:42.171555 1379 memory_manager.go:345] "RemoveStaleState removing state" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.171856 kubelet[1379]: I1002 20:18:42.171808 1379 memory_manager.go:345] "RemoveStaleState removing state" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.172029 kubelet[1379]: I1002 20:18:42.172006 1379 memory_manager.go:345] "RemoveStaleState removing state" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.172228 kubelet[1379]: I1002 20:18:42.172204 1379 memory_manager.go:345] "RemoveStaleState removing state" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.172499 kubelet[1379]: E1002 20:18:42.172436 1379 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.172796 kubelet[1379]: I1002 20:18:42.172742 1379 memory_manager.go:345] "RemoveStaleState removing state" podUID="4f4edfc2-67cc-4cfe-9338-b99187e9c818" containerName="mount-cgroup" Oct 2 20:18:42.185328 systemd[1]: Created slice kubepods-burstable-podf9cf0db5_8913_4242_b322_2a39596646d8.slice. Oct 2 20:18:42.188665 kubelet[1379]: I1002 20:18:42.188556 1379 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:18:42.193206 kubelet[1379]: E1002 20:18:42.193176 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:42.207246 systemd[1]: Created slice kubepods-besteffort-podf0b0d7dd_4830_4d73_9a1b_b2ce68236876.slice. Oct 2 20:18:42.281519 kubelet[1379]: I1002 20:18:42.281447 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-xtables-lock\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.282040 kubelet[1379]: I1002 20:18:42.281985 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-hubble-tls\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.282404 kubelet[1379]: I1002 20:18:42.282350 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzq5\" (UniqueName: \"kubernetes.io/projected/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-kube-api-access-kkzq5\") pod \"cilium-operator-69b677f97c-85grh\" (UID: \"f0b0d7dd-4830-4d73-9a1b-b2ce68236876\") " pod="kube-system/cilium-operator-69b677f97c-85grh" Oct 2 20:18:42.282729 kubelet[1379]: I1002 20:18:42.282682 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-ipsec-secrets\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.283018 kubelet[1379]: I1002 20:18:42.282993 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-net\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") 
" pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.283346 kubelet[1379]: I1002 20:18:42.283321 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzmzt\" (UniqueName: \"kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-kube-api-access-tzmzt\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.283669 kubelet[1379]: I1002 20:18:42.283644 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-run\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.284028 kubelet[1379]: I1002 20:18:42.284002 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-cgroup\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.284355 kubelet[1379]: I1002 20:18:42.284327 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-etc-cni-netd\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.284704 kubelet[1379]: I1002 20:18:42.284655 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-lib-modules\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.285040 kubelet[1379]: I1002 20:18:42.284995 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-cilium-config-path\") pod \"cilium-operator-69b677f97c-85grh\" (UID: \"f0b0d7dd-4830-4d73-9a1b-b2ce68236876\") " pod="kube-system/cilium-operator-69b677f97c-85grh" Oct 2 20:18:42.285331 kubelet[1379]: I1002 20:18:42.285305 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-bpf-maps\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.285660 kubelet[1379]: I1002 20:18:42.285635 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-clustermesh-secrets\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.285983 kubelet[1379]: I1002 20:18:42.285959 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-kernel\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.286302 kubelet[1379]: I1002 20:18:42.286277 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-hostproc\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.286649 kubelet[1379]: I1002 20:18:42.286625 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cni-path\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.286916 kubelet[1379]: I1002 20:18:42.286891 1379 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-config-path\") pod \"cilium-cmmhr\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " pod="kube-system/cilium-cmmhr" Oct 2 20:18:42.499804 env[1055]: time="2023-10-02T20:18:42.499642694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmmhr,Uid:f9cf0db5-8913-4242-b322-2a39596646d8,Namespace:kube-system,Attempt:0,}" Oct 2 20:18:42.514916 env[1055]: time="2023-10-02T20:18:42.514863194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-85grh,Uid:f0b0d7dd-4830-4d73-9a1b-b2ce68236876,Namespace:kube-system,Attempt:0,}" Oct 2 20:18:42.522888 env[1055]: time="2023-10-02T20:18:42.522405272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:18:42.522888 env[1055]: time="2023-10-02T20:18:42.522504828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:18:42.522888 env[1055]: time="2023-10-02T20:18:42.522529464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:18:42.523197 env[1055]: time="2023-10-02T20:18:42.523033058Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65 pid=2017 runtime=io.containerd.runc.v2 Oct 2 20:18:42.549114 env[1055]: time="2023-10-02T20:18:42.547730040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:18:42.549114 env[1055]: time="2023-10-02T20:18:42.547851186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:18:42.549114 env[1055]: time="2023-10-02T20:18:42.547877425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:18:42.549114 env[1055]: time="2023-10-02T20:18:42.548230678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc pid=2040 runtime=io.containerd.runc.v2 Oct 2 20:18:42.552930 systemd[1]: Started cri-containerd-f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65.scope. Oct 2 20:18:42.575384 systemd[1]: Started cri-containerd-8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc.scope. 
Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.588690 kernel: audit: type=1400 audit(1696277922.579:668): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.588758 kernel: audit: type=1400 audit(1696277922.579:669): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.588782 kernel: audit: type=1400 audit(1696277922.579:670): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.596955 kernel: audit: type=1400 audit(1696277922.579:671): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.604245 kernel: audit: type=1400 audit(1696277922.579:672): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.604355 kernel: audit: type=1400 audit(1696277922.579:673): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.604383 kernel: audit: type=1400 audit(1696277922.579:674): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.614593 kernel: audit: type=1400 audit(1696277922.579:675): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.618602 kernel: audit: type=1400 audit(1696277922.579:676): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.623603 kernel: audit: type=1400 audit(1696277922.579:677): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.579000 audit: BPF prog-id=78 op=LOAD Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=2017 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643938643366626561333934313861616462613366663631353262 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014d6b0 a2=3c a3=c items=0 ppid=2017 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643938643366626561333934313861616462613366663631353262 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.581000 audit: BPF prog-id=79 op=LOAD Oct 2 20:18:42.581000 audit[2028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014d9d8 a2=78 a3=c000204c10 items=0 ppid=2017 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643938643366626561333934313861616462613366663631353262 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { bpf } for pid=2028 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.583000 audit: BPF prog-id=80 op=LOAD Oct 2 20:18:42.583000 audit[2028]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00014d770 a2=78 a3=c000204c58 items=0 ppid=2017 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.583000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643938643366626561333934313861616462613366663631353262 Oct 2 20:18:42.590000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:18:42.590000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { perfmon } for pid=2028 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit[2028]: AVC avc: denied { bpf } for pid=2028 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.590000 audit: BPF prog-id=81 op=LOAD Oct 2 20:18:42.590000 audit[2028]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014dc30 a2=78 a3=c000205068 items=0 ppid=2017 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.590000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643938643366626561333934313861616462613366663631353262 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.622000 audit: BPF prog-id=82 op=LOAD Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2040 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363461386438393734613463393735346138353830303936333035 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=2040 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363461386438393734613463393735346138353830303936333035 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit: BPF prog-id=83 op=LOAD Oct 2 20:18:42.624000 audit[2052]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c0001cc140 items=0 ppid=2040 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363461386438393734613463393735346138353830303936333035 Oct 2 20:18:42.624000 
audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit: BPF prog-id=84 op=LOAD Oct 2 20:18:42.624000 audit[2052]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c0001cc188 items=0 ppid=2040 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363461386438393734613463393735346138353830303936333035 Oct 2 20:18:42.624000 audit: BPF prog-id=84 op=UNLOAD Oct 2 20:18:42.624000 audit: BPF prog-id=83 op=UNLOAD Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 
audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { perfmon } for pid=2052 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit[2052]: AVC avc: denied { bpf } for pid=2052 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:42.624000 audit: BPF prog-id=85 op=LOAD Oct 2 20:18:42.624000 audit[2052]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c0001cc598 items=0 ppid=2040 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:42.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866363461386438393734613463393735346138353830303936333035 Oct 2 20:18:42.636277 env[1055]: time="2023-10-02T20:18:42.636241357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cmmhr,Uid:f9cf0db5-8913-4242-b322-2a39596646d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\"" Oct 2 20:18:42.640588 env[1055]: time="2023-10-02T20:18:42.639400405Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:18:42.661002 env[1055]: time="2023-10-02T20:18:42.660954169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-85grh,Uid:f0b0d7dd-4830-4d73-9a1b-b2ce68236876,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\"" Oct 2 20:18:42.663393 env[1055]: time="2023-10-02T20:18:42.663369463Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 20:18:42.676063 env[1055]: time="2023-10-02T20:18:42.676014169Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\"" Oct 2 20:18:42.676783 env[1055]: time="2023-10-02T20:18:42.676738867Z" level=info msg="StartContainer for 
\"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\"" Oct 2 20:18:42.692318 systemd[1]: Started cri-containerd-15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f.scope. Oct 2 20:18:42.704655 systemd[1]: cri-containerd-15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f.scope: Deactivated successfully. Oct 2 20:18:42.727152 env[1055]: time="2023-10-02T20:18:42.727105512Z" level=info msg="shim disconnected" id=15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f Oct 2 20:18:42.727152 env[1055]: time="2023-10-02T20:18:42.727157209Z" level=warning msg="cleaning up after shim disconnected" id=15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f namespace=k8s.io Oct 2 20:18:42.727344 env[1055]: time="2023-10-02T20:18:42.727167839Z" level=info msg="cleaning up dead shim" Oct 2 20:18:42.734716 env[1055]: time="2023-10-02T20:18:42.734670233Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:18:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2114 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:18:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:18:42.735109 env[1055]: time="2023-10-02T20:18:42.735059442Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 20:18:42.737679 env[1055]: time="2023-10-02T20:18:42.737642962Z" level=error msg="Failed to pipe stdout of container \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\"" error="reading from a closed fifo" Oct 2 20:18:42.738003 env[1055]: time="2023-10-02T20:18:42.737769808Z" level=error msg="Failed to pipe stderr of container \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\"" error="reading from a closed fifo" Oct 2 20:18:42.741094 env[1055]: time="2023-10-02T20:18:42.741063138Z" level=error msg="StartContainer for \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:18:42.741540 kubelet[1379]: E1002 20:18:42.741363 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f" Oct 2 20:18:42.741540 kubelet[1379]: E1002 20:18:42.741472 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:18:42.741540 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:18:42.741540 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:18:42.741722 kubelet[1379]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tzmzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:18:42.741824 kubelet[1379]: E1002 20:18:42.741515 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:18:42.925650 env[1055]: time="2023-10-02T20:18:42.925515210Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:18:42.956043 env[1055]: time="2023-10-02T20:18:42.955949760Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\"" Oct 2 20:18:42.957541 env[1055]: time="2023-10-02T20:18:42.957398062Z" level=info msg="StartContainer for \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\"" Oct 2 20:18:42.994769 systemd[1]: Started cri-containerd-ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75.scope. Oct 2 20:18:43.016469 systemd[1]: cri-containerd-ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75.scope: Deactivated successfully. 
Oct 2 20:18:43.033534 env[1055]: time="2023-10-02T20:18:43.033448579Z" level=info msg="shim disconnected" id=ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75 Oct 2 20:18:43.033949 env[1055]: time="2023-10-02T20:18:43.033906357Z" level=warning msg="cleaning up after shim disconnected" id=ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75 namespace=k8s.io Oct 2 20:18:43.034116 env[1055]: time="2023-10-02T20:18:43.034081755Z" level=info msg="cleaning up dead shim" Oct 2 20:18:43.051084 env[1055]: time="2023-10-02T20:18:43.050946606Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:18:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2149 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:18:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:18:43.052682 env[1055]: time="2023-10-02T20:18:43.052538918Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 20:18:43.053164 env[1055]: time="2023-10-02T20:18:43.053084280Z" level=error msg="Failed to pipe stderr of container \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\"" error="reading from a closed fifo" Oct 2 20:18:43.053857 env[1055]: time="2023-10-02T20:18:43.053784172Z" level=error msg="Failed to pipe stdout of container \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\"" error="reading from a closed fifo" Oct 2 20:18:43.058079 env[1055]: time="2023-10-02T20:18:43.057992955Z" level=error msg="StartContainer for \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:18:43.058593 kubelet[1379]: E1002 20:18:43.058306 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75" Oct 2 20:18:43.058940 kubelet[1379]: E1002 20:18:43.058860 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:18:43.058940 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:18:43.058940 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:18:43.058940 kubelet[1379]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tzmzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:18:43.059155 kubelet[1379]: E1002 20:18:43.058902 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:18:43.152729 kubelet[1379]: E1002 20:18:43.152687 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:43.196139 kubelet[1379]: E1002 20:18:43.194816 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:43.935820 kubelet[1379]: I1002 20:18:43.934827 1379 scope.go:115] "RemoveContainer" containerID="15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f" Oct 2 20:18:43.935820 kubelet[1379]: I1002 20:18:43.935458 1379 scope.go:115] "RemoveContainer" containerID="15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f" Oct 2 20:18:43.939966 env[1055]: time="2023-10-02T20:18:43.939848213Z" level=info msg="RemoveContainer for \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\"" Oct 2 20:18:43.948220 env[1055]: time="2023-10-02T20:18:43.947461826Z" level=info msg="RemoveContainer for \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\"" Oct 2 20:18:43.952222 env[1055]: time="2023-10-02T20:18:43.952049168Z" level=error msg="RemoveContainer for \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\" failed" error="failed to set removing state for container \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\": 
container is already in removing state" Oct 2 20:18:43.953197 kubelet[1379]: E1002 20:18:43.953001 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\": container is already in removing state" containerID="15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f" Oct 2 20:18:43.953197 kubelet[1379]: E1002 20:18:43.953104 1379 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f": container is already in removing state; Skipping pod "cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)" Oct 2 20:18:43.954024 kubelet[1379]: E1002 20:18:43.953909 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:18:43.958435 env[1055]: time="2023-10-02T20:18:43.958339241Z" level=info msg="RemoveContainer for \"15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f\" returns successfully" Oct 2 20:18:44.187269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050947663.mount: Deactivated successfully. Oct 2 20:18:44.196181 kubelet[1379]: E1002 20:18:44.196104 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:44.938470 kubelet[1379]: E1002 20:18:44.938405 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:18:45.196786 kubelet[1379]: E1002 20:18:45.196603 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:45.302121 env[1055]: time="2023-10-02T20:18:45.302051820Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:18:45.304321 env[1055]: time="2023-10-02T20:18:45.304268783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:18:45.307737 env[1055]: time="2023-10-02T20:18:45.307689831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:18:45.309066 env[1055]: time="2023-10-02T20:18:45.309022638Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 20:18:45.314258 
env[1055]: time="2023-10-02T20:18:45.314192652Z" level=info msg="CreateContainer within sandbox \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 20:18:45.336332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229253016.mount: Deactivated successfully. Oct 2 20:18:45.342924 env[1055]: time="2023-10-02T20:18:45.342861344Z" level=info msg="CreateContainer within sandbox \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\"" Oct 2 20:18:45.344532 env[1055]: time="2023-10-02T20:18:45.344426255Z" level=info msg="StartContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\"" Oct 2 20:18:45.371202 systemd[1]: Started cri-containerd-3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303.scope. Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.391000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.392000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.392000 audit: BPF prog-id=86 op=LOAD Oct 2 20:18:45.393000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.393000 audit[2170]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=2040 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:45.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337353464626337373666383132333964636361363263373033323438 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=8 items=0 ppid=2040 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:45.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337353464626337373666383132333964636361363263373033323438 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.394000 audit: BPF prog-id=87 op=LOAD Oct 2 20:18:45.394000 audit[2170]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 
a2=78 a3=c000024eb0 items=0 ppid=2040 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:45.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337353464626337373666383132333964636361363263373033323438 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.398000 audit: BPF prog-id=88 op=LOAD Oct 2 20:18:45.398000 audit[2170]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000024ef8 items=0 ppid=2040 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:45.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337353464626337373666383132333964636361363263373033323438 Oct 2 20:18:45.400000 audit: BPF prog-id=88 op=UNLOAD Oct 2 20:18:45.400000 audit: BPF prog-id=87 op=UNLOAD Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { perfmon } for pid=2170 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit[2170]: AVC avc: denied { bpf } for pid=2170 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:18:45.400000 audit: BPF prog-id=89 op=LOAD Oct 2 20:18:45.400000 audit[2170]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000025308 items=0 ppid=2040 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:18:45.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337353464626337373666383132333964636361363263373033323438 Oct 2 20:18:45.423022 env[1055]: time="2023-10-02T20:18:45.422989440Z" level=info msg="StartContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" returns successfully" Oct 2 20:18:45.439000 audit[2182]: AVC avc: denied { map_create } for pid=2182 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c120,c808 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c120,c808 tclass=bpf permissive=0 Oct 2 20:18:45.439000 audit[2182]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00061f7d0 a2=48 a3=c00061f7c0 items=0 ppid=2040 pid=2182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c120,c808 key=(null) Oct 2 20:18:45.439000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 20:18:45.834660 
kubelet[1379]: W1002 20:18:45.834550 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9cf0db5_8913_4242_b322_2a39596646d8.slice/cri-containerd-15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f.scope WatchSource:0}: container "15c95660cb7d8e68cbb04348923948323e65018fda48b6ae04f9ce6cb941008f" in namespace "k8s.io": not found Oct 2 20:18:46.197794 kubelet[1379]: E1002 20:18:46.197650 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:47.198483 kubelet[1379]: E1002 20:18:47.198434 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:48.154949 kubelet[1379]: E1002 20:18:48.154870 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:48.200006 kubelet[1379]: E1002 20:18:48.199791 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:48.945498 kubelet[1379]: W1002 20:18:48.945418 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9cf0db5_8913_4242_b322_2a39596646d8.slice/cri-containerd-ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75.scope WatchSource:0}: task ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75 not found: not found Oct 2 20:18:49.200679 kubelet[1379]: E1002 20:18:49.200423 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:50.201506 kubelet[1379]: E1002 20:18:50.201452 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:51.202537 kubelet[1379]: E1002 20:18:51.202439 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:52.203235 kubelet[1379]: E1002 20:18:52.203151 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:52.938185 kubelet[1379]: E1002 20:18:52.938134 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:53.156519 kubelet[1379]: E1002 20:18:53.156440 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:53.204286 kubelet[1379]: E1002 20:18:53.204148 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:54.206078 kubelet[1379]: E1002 20:18:54.206017 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:55.207019 kubelet[1379]: E1002 20:18:55.206923 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:56.208071 kubelet[1379]: E1002 20:18:56.207982 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:56.249168 env[1055]: 
time="2023-10-02T20:18:56.249046743Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:18:56.280427 env[1055]: time="2023-10-02T20:18:56.280286552Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\"" Oct 2 20:18:56.281794 env[1055]: time="2023-10-02T20:18:56.281742590Z" level=info msg="StartContainer for \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\"" Oct 2 20:18:56.342855 systemd[1]: Started cri-containerd-efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273.scope. Oct 2 20:18:56.353383 systemd[1]: cri-containerd-efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273.scope: Deactivated successfully. Oct 2 20:18:56.353757 systemd[1]: Stopped cri-containerd-efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273.scope. Oct 2 20:18:56.359737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273-rootfs.mount: Deactivated successfully. Oct 2 20:18:56.665472 env[1055]: time="2023-10-02T20:18:56.665384030Z" level=info msg="shim disconnected" id=efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273 Oct 2 20:18:56.666027 env[1055]: time="2023-10-02T20:18:56.665980247Z" level=warning msg="cleaning up after shim disconnected" id=efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273 namespace=k8s.io Oct 2 20:18:56.666205 env[1055]: time="2023-10-02T20:18:56.666169642Z" level=info msg="cleaning up dead shim" Oct 2 20:18:56.682926 env[1055]: time="2023-10-02T20:18:56.682834178Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:18:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2226 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:18:56Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:18:56.683409 env[1055]: time="2023-10-02T20:18:56.683295913Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 20:18:56.686746 env[1055]: time="2023-10-02T20:18:56.686657800Z" level=error msg="Failed to pipe stdout of container \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\"" error="reading from a closed fifo" Oct 2 20:18:56.686874 env[1055]: time="2023-10-02T20:18:56.686802271Z" level=error msg="Failed to pipe stderr of container \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\"" error="reading from a closed fifo" Oct 2 20:18:56.691065 env[1055]: time="2023-10-02T20:18:56.690986559Z" level=error msg="StartContainer for \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:18:56.691802 kubelet[1379]: E1002 20:18:56.691745 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create 
containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273" Oct 2 20:18:56.692732 kubelet[1379]: E1002 20:18:56.692066 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:18:56.692732 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:18:56.692732 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:18:56.692732 kubelet[1379]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tzmzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:18:56.693149 kubelet[1379]: E1002 20:18:56.692236 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:18:56.979145 kubelet[1379]: I1002 20:18:56.975268 1379 scope.go:115] "RemoveContainer" containerID="ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75" Oct 2 20:18:56.979145 kubelet[1379]: I1002 20:18:56.976709 1379 scope.go:115] "RemoveContainer" containerID="ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75" Oct 2 20:18:56.982756 env[1055]: time="2023-10-02T20:18:56.982613002Z" level=info msg="RemoveContainer for \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\"" Oct 2 20:18:56.985121 env[1055]: 
time="2023-10-02T20:18:56.985062901Z" level=info msg="RemoveContainer for \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\"" Oct 2 20:18:56.985711 env[1055]: time="2023-10-02T20:18:56.985551366Z" level=error msg="RemoveContainer for \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\" failed" error="failed to set removing state for container \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\": container is already in removing state" Oct 2 20:18:56.986666 kubelet[1379]: E1002 20:18:56.986631 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\": container is already in removing state" containerID="ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75" Oct 2 20:18:56.986984 kubelet[1379]: I1002 20:18:56.986959 1379 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75} err="rpc error: code = Unknown desc = failed to set removing state for container \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\": container is already in removing state" Oct 2 20:18:56.989911 env[1055]: time="2023-10-02T20:18:56.989797410Z" level=info msg="RemoveContainer for \"ceba74d8870a0b1f45f33e17cb89cd9a58302961f3b1227752daf4165b46ed75\" returns successfully" Oct 2 20:18:56.991564 kubelet[1379]: E1002 20:18:56.991529 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:18:57.208202 kubelet[1379]: E1002 20:18:57.208151 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:58.157877 kubelet[1379]: E1002 20:18:58.157806 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:18:58.209027 kubelet[1379]: E1002 20:18:58.208979 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:59.210103 kubelet[1379]: E1002 20:18:59.210054 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:18:59.772153 kubelet[1379]: W1002 20:18:59.772061 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9cf0db5_8913_4242_b322_2a39596646d8.slice/cri-containerd-efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273.scope WatchSource:0}: task efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273 not found: not found Oct 2 20:19:00.211809 kubelet[1379]: E1002 20:19:00.211764 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:01.212925 kubelet[1379]: E1002 20:19:01.212875 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:02.214478 kubelet[1379]: E1002 20:19:02.214416 1379 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:03.159206 kubelet[1379]: E1002 20:19:03.159124 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:03.214853 kubelet[1379]: E1002 20:19:03.214812 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:04.216235 kubelet[1379]: E1002 20:19:04.216166 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:05.217080 kubelet[1379]: E1002 20:19:05.217035 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:06.218282 kubelet[1379]: E1002 20:19:06.218233 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:07.220065 kubelet[1379]: E1002 20:19:07.220015 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:08.160980 kubelet[1379]: E1002 20:19:08.160911 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:08.221578 kubelet[1379]: E1002 20:19:08.221529 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:08.243759 kubelet[1379]: E1002 20:19:08.243714 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:19:09.222738 kubelet[1379]: E1002 20:19:09.222682 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:10.224359 kubelet[1379]: E1002 20:19:10.224302 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:11.226079 kubelet[1379]: E1002 20:19:11.225999 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:12.227519 kubelet[1379]: E1002 20:19:12.227453 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:12.938245 kubelet[1379]: E1002 20:19:12.938160 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:13.163129 kubelet[1379]: E1002 20:19:13.163060 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:13.228111 kubelet[1379]: E1002 20:19:13.227995 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:14.229317 kubelet[1379]: E1002 20:19:14.229267 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:19:15.231098 kubelet[1379]: E1002 20:19:15.231019 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:16.232320 kubelet[1379]: E1002 20:19:16.232198 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:17.232356 kubelet[1379]: E1002 20:19:17.232309 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:18.164498 kubelet[1379]: E1002 20:19:18.164423 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:18.233942 kubelet[1379]: E1002 20:19:18.233832 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:19.235075 kubelet[1379]: E1002 20:19:19.235009 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:20.235245 kubelet[1379]: E1002 20:19:20.235199 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:21.236726 kubelet[1379]: E1002 20:19:21.236678 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:21.247733 env[1055]: time="2023-10-02T20:19:21.247426063Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:19:21.266093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313679303.mount: Deactivated successfully. Oct 2 20:19:21.280388 env[1055]: time="2023-10-02T20:19:21.280308851Z" level=info msg="CreateContainer within sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\"" Oct 2 20:19:21.282247 env[1055]: time="2023-10-02T20:19:21.282124893Z" level=info msg="StartContainer for \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\"" Oct 2 20:19:21.330242 systemd[1]: Started cri-containerd-415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451.scope. Oct 2 20:19:21.348279 systemd[1]: cri-containerd-415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451.scope: Deactivated successfully. 
Oct 2 20:19:21.378405 env[1055]: time="2023-10-02T20:19:21.378311016Z" level=info msg="shim disconnected" id=415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451 Oct 2 20:19:21.378405 env[1055]: time="2023-10-02T20:19:21.378356451Z" level=warning msg="cleaning up after shim disconnected" id=415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451 namespace=k8s.io Oct 2 20:19:21.378405 env[1055]: time="2023-10-02T20:19:21.378368383Z" level=info msg="cleaning up dead shim" Oct 2 20:19:21.393295 env[1055]: time="2023-10-02T20:19:21.393204365Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:19:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2269 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:19:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:19:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:19:21.394223 env[1055]: time="2023-10-02T20:19:21.394119620Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:19:21.394696 env[1055]: time="2023-10-02T20:19:21.394620709Z" level=error msg="Failed to pipe stdout of container \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\"" error="reading from a closed fifo" Oct 2 20:19:21.395339 env[1055]: time="2023-10-02T20:19:21.394885615Z" level=error msg="Failed to pipe stderr of container \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\"" error="reading from a closed fifo" Oct 2 20:19:21.397129 env[1055]: time="2023-10-02T20:19:21.397052535Z" level=error msg="StartContainer for \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:19:21.398729 kubelet[1379]: E1002 20:19:21.397752 1379 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451" Oct 2 20:19:21.398729 kubelet[1379]: E1002 20:19:21.397939 1379 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:19:21.398729 kubelet[1379]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:19:21.398729 kubelet[1379]: rm /hostbin/cilium-mount Oct 2 20:19:21.399191 kubelet[1379]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-tzmzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:19:21.399348 kubelet[1379]: E1002 20:19:21.398028 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:19:22.037316 kubelet[1379]: I1002 20:19:22.037275 1379 scope.go:115] "RemoveContainer" containerID="efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273" Oct 2 20:19:22.038282 kubelet[1379]: I1002 20:19:22.038250 1379 scope.go:115] "RemoveContainer" containerID="efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273" Oct 2 20:19:22.040940 env[1055]: time="2023-10-02T20:19:22.040868881Z" level=info msg="RemoveContainer for \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\"" Oct 2 20:19:22.042428 env[1055]: time="2023-10-02T20:19:22.042327834Z" level=info msg="RemoveContainer for \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\"" Oct 2 20:19:22.042560 env[1055]: time="2023-10-02T20:19:22.042497050Z" level=error msg="RemoveContainer for \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\" failed" error="failed to set removing state for container \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\": container is already in removing state" Oct 2 20:19:22.042910 kubelet[1379]: E1002 20:19:22.042874 1379 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\": container is already in removing state" 
containerID="efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273" Oct 2 20:19:22.043052 kubelet[1379]: E1002 20:19:22.042937 1379 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273": container is already in removing state; Skipping pod "cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)" Oct 2 20:19:22.043628 kubelet[1379]: E1002 20:19:22.043562 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:19:22.046858 env[1055]: time="2023-10-02T20:19:22.046794470Z" level=info msg="RemoveContainer for \"efa912d18f51074be984f9f906f91b0c08d915a29a22173903c77803fdb14273\" returns successfully" Oct 2 20:19:22.238018 kubelet[1379]: E1002 20:19:22.237969 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:22.260815 systemd[1]: run-containerd-runc-k8s.io-415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451-runc.cQ99MW.mount: Deactivated successfully. Oct 2 20:19:22.261052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451-rootfs.mount: Deactivated successfully. Oct 2 20:19:23.165724 kubelet[1379]: E1002 20:19:23.165650 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:23.239184 kubelet[1379]: E1002 20:19:23.239121 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:24.240156 kubelet[1379]: E1002 20:19:24.240088 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:24.485319 kubelet[1379]: W1002 20:19:24.485223 1379 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9cf0db5_8913_4242_b322_2a39596646d8.slice/cri-containerd-415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451.scope WatchSource:0}: task 415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451 not found: not found Oct 2 20:19:25.241170 kubelet[1379]: E1002 20:19:25.241114 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:26.242661 kubelet[1379]: E1002 20:19:26.242604 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:27.243627 kubelet[1379]: E1002 20:19:27.243533 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:28.167738 kubelet[1379]: E1002 20:19:28.167692 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:28.243900 kubelet[1379]: E1002 20:19:28.243858 1379 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:29.244554 kubelet[1379]: E1002 20:19:29.244483 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:30.245317 kubelet[1379]: E1002 20:19:30.245236 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:31.245522 kubelet[1379]: E1002 20:19:31.245485 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:32.247106 kubelet[1379]: E1002 20:19:32.247022 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:32.937712 kubelet[1379]: E1002 20:19:32.937619 1379 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:32.975930 env[1055]: time="2023-10-02T20:19:32.975443064Z" level=info msg="StopPodSandbox for \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\"" Oct 2 20:19:32.975930 env[1055]: time="2023-10-02T20:19:32.975693945Z" level=info msg="TearDown network for sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" successfully" Oct 2 20:19:32.975930 env[1055]: time="2023-10-02T20:19:32.975776078Z" level=info msg="StopPodSandbox for \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" returns successfully" Oct 2 20:19:32.977634 env[1055]: time="2023-10-02T20:19:32.977527158Z" level=info msg="RemovePodSandbox for \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\"" Oct 2 20:19:32.977767 env[1055]: time="2023-10-02T20:19:32.977647213Z" level=info msg="Forcibly stopping sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\"" Oct 2 20:19:32.977956 env[1055]: time="2023-10-02T20:19:32.977907021Z" level=info msg="TearDown network for sandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" successfully" Oct 2 20:19:32.984240 env[1055]: time="2023-10-02T20:19:32.984132032Z" level=info msg="RemovePodSandbox \"77d12711858939b4e756db43c799fd5e9ec630e739f801100f179c98d7fefe25\" returns successfully" Oct 2 20:19:33.169161 kubelet[1379]: E1002 20:19:33.169075 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:33.247740 kubelet[1379]: E1002 20:19:33.247241 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:34.248019 kubelet[1379]: E1002 20:19:34.247955 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:35.249035 kubelet[1379]: E1002 20:19:35.248983 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:36.242983 kubelet[1379]: E1002 20:19:36.242891 1379 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-cmmhr_kube-system(f9cf0db5-8913-4242-b322-2a39596646d8)\"" pod="kube-system/cilium-cmmhr" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 Oct 2 20:19:36.249642 kubelet[1379]: E1002 20:19:36.249599 1379 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:37.250381 kubelet[1379]: E1002 20:19:37.250338 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:38.170859 kubelet[1379]: E1002 20:19:38.170817 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:38.251869 kubelet[1379]: E1002 20:19:38.251817 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:39.253094 kubelet[1379]: E1002 20:19:39.253054 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:40.253981 kubelet[1379]: E1002 20:19:40.253887 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:41.254245 kubelet[1379]: E1002 20:19:41.254120 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:42.254744 kubelet[1379]: E1002 20:19:42.254641 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:42.307958 env[1055]: time="2023-10-02T20:19:42.307846586Z" level=info msg="StopPodSandbox for \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\"" Oct 2 20:19:42.313560 env[1055]: time="2023-10-02T20:19:42.307983733Z" level=info msg="Container to stop \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:19:42.311932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65-shm.mount: Deactivated successfully. Oct 2 20:19:42.327345 systemd[1]: cri-containerd-f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65.scope: Deactivated successfully. Oct 2 20:19:42.326000 audit: BPF prog-id=78 op=UNLOAD Oct 2 20:19:42.331526 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 20:19:42.331717 kernel: audit: type=1334 audit(1696277982.326:723): prog-id=78 op=UNLOAD Oct 2 20:19:42.336433 kernel: audit: type=1334 audit(1696277982.333:724): prog-id=81 op=UNLOAD Oct 2 20:19:42.333000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:19:42.365527 env[1055]: time="2023-10-02T20:19:42.365430578Z" level=info msg="StopContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" with timeout 30 (s)" Oct 2 20:19:42.367164 env[1055]: time="2023-10-02T20:19:42.367109202Z" level=info msg="Stop container \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" with signal terminated" Oct 2 20:19:42.385713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65-rootfs.mount: Deactivated successfully. 
Oct 2 20:19:42.401000 audit: BPF prog-id=86 op=UNLOAD Oct 2 20:19:42.406956 env[1055]: time="2023-10-02T20:19:42.402480884Z" level=info msg="shim disconnected" id=f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65 Oct 2 20:19:42.406956 env[1055]: time="2023-10-02T20:19:42.402566374Z" level=warning msg="cleaning up after shim disconnected" id=f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65 namespace=k8s.io Oct 2 20:19:42.406956 env[1055]: time="2023-10-02T20:19:42.404060954Z" level=info msg="cleaning up dead shim" Oct 2 20:19:42.402941 systemd[1]: cri-containerd-3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303.scope: Deactivated successfully. Oct 2 20:19:42.411900 kernel: audit: type=1334 audit(1696277982.401:725): prog-id=86 op=UNLOAD Oct 2 20:19:42.412186 kernel: audit: type=1334 audit(1696277982.406:726): prog-id=89 op=UNLOAD Oct 2 20:19:42.406000 audit: BPF prog-id=89 op=UNLOAD Oct 2 20:19:42.427027 env[1055]: time="2023-10-02T20:19:42.426985598Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:19:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2316 runtime=io.containerd.runc.v2\n" Oct 2 20:19:42.427379 env[1055]: time="2023-10-02T20:19:42.427354128Z" level=info msg="TearDown network for sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" successfully" Oct 2 20:19:42.427445 env[1055]: time="2023-10-02T20:19:42.427378073Z" level=info msg="StopPodSandbox for \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" returns successfully" Oct 2 20:19:42.432116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303-rootfs.mount: Deactivated successfully. Oct 2 20:19:42.438886 env[1055]: time="2023-10-02T20:19:42.438850074Z" level=info msg="shim disconnected" id=3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303 Oct 2 20:19:42.439011 env[1055]: time="2023-10-02T20:19:42.438887845Z" level=warning msg="cleaning up after shim disconnected" id=3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303 namespace=k8s.io Oct 2 20:19:42.439011 env[1055]: time="2023-10-02T20:19:42.438897393Z" level=info msg="cleaning up dead shim" Oct 2 20:19:42.446650 env[1055]: time="2023-10-02T20:19:42.446611514Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:19:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2341 runtime=io.containerd.runc.v2\n" Oct 2 20:19:42.449895 env[1055]: time="2023-10-02T20:19:42.449864226Z" level=info msg="StopContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" returns successfully" Oct 2 20:19:42.450341 env[1055]: time="2023-10-02T20:19:42.450312827Z" level=info msg="StopPodSandbox for \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\"" Oct 2 20:19:42.450388 env[1055]: time="2023-10-02T20:19:42.450368431Z" level=info msg="Container to stop \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:19:42.451707 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc-shm.mount: Deactivated successfully. Oct 2 20:19:42.458985 systemd[1]: cri-containerd-8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc.scope: Deactivated successfully. 
Oct 2 20:19:42.460612 kernel: audit: type=1334 audit(1696277982.457:727): prog-id=82 op=UNLOAD Oct 2 20:19:42.457000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:19:42.462000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:19:42.465582 kernel: audit: type=1334 audit(1696277982.462:728): prog-id=85 op=UNLOAD Oct 2 20:19:42.483710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc-rootfs.mount: Deactivated successfully. Oct 2 20:19:42.495721 env[1055]: time="2023-10-02T20:19:42.495677929Z" level=info msg="shim disconnected" id=8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc Oct 2 20:19:42.495847 env[1055]: time="2023-10-02T20:19:42.495724426Z" level=warning msg="cleaning up after shim disconnected" id=8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc namespace=k8s.io Oct 2 20:19:42.495847 env[1055]: time="2023-10-02T20:19:42.495734725Z" level=info msg="cleaning up dead shim" Oct 2 20:19:42.503998 env[1055]: time="2023-10-02T20:19:42.503958621Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:19:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2375 runtime=io.containerd.runc.v2\n" Oct 2 20:19:42.504470 env[1055]: time="2023-10-02T20:19:42.504444221Z" level=info msg="TearDown network for sandbox \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\" successfully" Oct 2 20:19:42.504558 env[1055]: time="2023-10-02T20:19:42.504539178Z" level=info msg="StopPodSandbox for \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\" returns successfully" Oct 2 20:19:42.559430 kubelet[1379]: I1002 20:19:42.556207 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-bpf-maps\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.559430 kubelet[1379]: I1002 20:19:42.556237 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.559430 kubelet[1379]: I1002 20:19:42.556277 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cni-path\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.559430 kubelet[1379]: I1002 20:19:42.556313 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzmzt\" (UniqueName: \"kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-kube-api-access-tzmzt\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.559430 kubelet[1379]: I1002 20:19:42.556336 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-etc-cni-netd\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.559430 kubelet[1379]: I1002 20:19:42.556361 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-cilium-config-path\") pod \"f0b0d7dd-4830-4d73-9a1b-b2ce68236876\" (UID: \"f0b0d7dd-4830-4d73-9a1b-b2ce68236876\") " Oct 2 20:19:42.560115 kubelet[1379]: I1002 20:19:42.556382 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-kernel\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560115 kubelet[1379]: I1002 20:19:42.556405 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-hubble-tls\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560115 kubelet[1379]: I1002 20:19:42.556425 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-net\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560115 kubelet[1379]: I1002 20:19:42.556445 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-hostproc\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560115 kubelet[1379]: I1002 20:19:42.556467 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-xtables-lock\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560115 kubelet[1379]: I1002 20:19:42.556490 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-ipsec-secrets\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560639 kubelet[1379]: I1002 20:19:42.556510 
1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-run\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560639 kubelet[1379]: I1002 20:19:42.556529 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-lib-modules\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560639 kubelet[1379]: I1002 20:19:42.556559 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkzq5\" (UniqueName: \"kubernetes.io/projected/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-kube-api-access-kkzq5\") pod \"f0b0d7dd-4830-4d73-9a1b-b2ce68236876\" (UID: \"f0b0d7dd-4830-4d73-9a1b-b2ce68236876\") " Oct 2 20:19:42.560639 kubelet[1379]: I1002 20:19:42.556595 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-cgroup\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560639 kubelet[1379]: I1002 20:19:42.556619 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-clustermesh-secrets\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.560639 kubelet[1379]: I1002 20:19:42.556643 1379 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-config-path\") pod \"f9cf0db5-8913-4242-b322-2a39596646d8\" (UID: \"f9cf0db5-8913-4242-b322-2a39596646d8\") " Oct 2 20:19:42.561060 kubelet[1379]: I1002 20:19:42.556676 1379 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-bpf-maps\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.561060 kubelet[1379]: W1002 20:19:42.556911 1379 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/f9cf0db5-8913-4242-b322-2a39596646d8/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:19:42.561060 kubelet[1379]: I1002 20:19:42.559766 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cni-path" (OuterVolumeSpecName: "cni-path") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.561060 kubelet[1379]: I1002 20:19:42.560645 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.561060 kubelet[1379]: W1002 20:19:42.560960 1379 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/f0b0d7dd-4830-4d73-9a1b-b2ce68236876/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:19:42.563668 kubelet[1379]: I1002 20:19:42.561946 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.563668 kubelet[1379]: I1002 20:19:42.562017 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.563668 kubelet[1379]: I1002 20:19:42.562403 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.565170 kubelet[1379]: I1002 20:19:42.565121 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:19:42.568151 kubelet[1379]: I1002 20:19:42.568102 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.568286 kubelet[1379]: I1002 20:19:42.568159 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.569017 kubelet[1379]: I1002 20:19:42.568973 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-hostproc" (OuterVolumeSpecName: "hostproc") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.569150 kubelet[1379]: I1002 20:19:42.569047 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:19:42.569462 kubelet[1379]: I1002 20:19:42.569400 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-kube-api-access-tzmzt" (OuterVolumeSpecName: "kube-api-access-tzmzt") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "kube-api-access-tzmzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:19:42.570547 kubelet[1379]: I1002 20:19:42.570503 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0b0d7dd-4830-4d73-9a1b-b2ce68236876" (UID: "f0b0d7dd-4830-4d73-9a1b-b2ce68236876"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:19:42.575499 kubelet[1379]: I1002 20:19:42.575436 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:19:42.575730 kubelet[1379]: I1002 20:19:42.575566 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:19:42.578138 kubelet[1379]: I1002 20:19:42.578089 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-kube-api-access-kkzq5" (OuterVolumeSpecName: "kube-api-access-kkzq5") pod "f0b0d7dd-4830-4d73-9a1b-b2ce68236876" (UID: "f0b0d7dd-4830-4d73-9a1b-b2ce68236876"). InnerVolumeSpecName "kube-api-access-kkzq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:19:42.579813 kubelet[1379]: I1002 20:19:42.579737 1379 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f9cf0db5-8913-4242-b322-2a39596646d8" (UID: "f9cf0db5-8913-4242-b322-2a39596646d8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:19:42.657322 kubelet[1379]: I1002 20:19:42.657225 1379 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-lib-modules\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657322 kubelet[1379]: I1002 20:19:42.657282 1379 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-hostproc\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657322 kubelet[1379]: I1002 20:19:42.657319 1379 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-xtables-lock\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657391 1379 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-ipsec-secrets\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657425 1379 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-run\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657455 1379 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-config-path\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657484 1379 reconciler.go:399] "Volume detached for volume \"kube-api-access-kkzq5\" (UniqueName: \"kubernetes.io/projected/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-kube-api-access-kkzq5\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657511 1379 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cilium-cgroup\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657539 1379 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f9cf0db5-8913-4242-b322-2a39596646d8-clustermesh-secrets\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657564 1379 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-cni-path\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.657818 kubelet[1379]: I1002 20:19:42.657693 1379 reconciler.go:399] "Volume detached for volume \"kube-api-access-tzmzt\" (UniqueName: \"kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-kube-api-access-tzmzt\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.658390 kubelet[1379]: I1002 20:19:42.657749 1379 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-etc-cni-netd\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.658390 kubelet[1379]: I1002 20:19:42.657779 1379 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0b0d7dd-4830-4d73-9a1b-b2ce68236876-cilium-config-path\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.658390 
kubelet[1379]: I1002 20:19:42.657807 1379 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-kernel\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.658390 kubelet[1379]: I1002 20:19:42.657834 1379 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f9cf0db5-8913-4242-b322-2a39596646d8-hubble-tls\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:42.658390 kubelet[1379]: I1002 20:19:42.657882 1379 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f9cf0db5-8913-4242-b322-2a39596646d8-host-proc-sys-net\") on node \"172.24.4.201\" DevicePath \"\"" Oct 2 20:19:43.100790 kubelet[1379]: I1002 20:19:43.100751 1379 scope.go:115] "RemoveContainer" containerID="415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451" Oct 2 20:19:43.105483 env[1055]: time="2023-10-02T20:19:43.104938372Z" level=info msg="RemoveContainer for \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\"" Oct 2 20:19:43.105735 systemd[1]: Removed slice kubepods-burstable-podf9cf0db5_8913_4242_b322_2a39596646d8.slice. Oct 2 20:19:43.109736 env[1055]: time="2023-10-02T20:19:43.109632216Z" level=info msg="RemoveContainer for \"415197271bb394e96f39d96944c8114b73a77769f96fcda6911dc96363b7a451\" returns successfully" Oct 2 20:19:43.114257 kubelet[1379]: I1002 20:19:43.114199 1379 scope.go:115] "RemoveContainer" containerID="3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303" Oct 2 20:19:43.117384 env[1055]: time="2023-10-02T20:19:43.116830230Z" level=info msg="RemoveContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\"" Oct 2 20:19:43.120796 env[1055]: time="2023-10-02T20:19:43.120734453Z" level=info msg="RemoveContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" returns successfully" Oct 2 20:19:43.121460 kubelet[1379]: I1002 20:19:43.121385 1379 scope.go:115] "RemoveContainer" containerID="3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303" Oct 2 20:19:43.122220 env[1055]: time="2023-10-02T20:19:43.121974537Z" level=error msg="ContainerStatus for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\": not found" Oct 2 20:19:43.122696 kubelet[1379]: E1002 20:19:43.122636 1379 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\": not found" containerID="3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303" Oct 2 20:19:43.122835 kubelet[1379]: I1002 20:19:43.122736 1379 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303} err="failed to get container status \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\": rpc error: code = NotFound desc = an error occurred when try to find container \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\": not found" Oct 2 20:19:43.129276 systemd[1]: Removed slice kubepods-besteffort-podf0b0d7dd_4830_4d73_9a1b_b2ce68236876.slice. 
Oct 2 20:19:43.172954 kubelet[1379]: E1002 20:19:43.172918 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:43.245258 env[1055]: time="2023-10-02T20:19:43.244880135Z" level=info msg="StopPodSandbox for \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\"" Oct 2 20:19:43.245258 env[1055]: time="2023-10-02T20:19:43.245045995Z" level=info msg="TearDown network for sandbox \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" successfully" Oct 2 20:19:43.245258 env[1055]: time="2023-10-02T20:19:43.245137206Z" level=info msg="StopPodSandbox for \"f3d98d3fbea39418aadba3ff6152b3d368e864be83d000efb74b474664ebec65\" returns successfully" Oct 2 20:19:43.246263 env[1055]: time="2023-10-02T20:19:43.246157407Z" level=info msg="StopContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" with timeout 1 (s)" Oct 2 20:19:43.246466 env[1055]: time="2023-10-02T20:19:43.246313380Z" level=error msg="StopContainer for \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\": not found" Oct 2 20:19:43.247675 kubelet[1379]: E1002 20:19:43.247643 1379 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303\": not found" containerID="3754dbc776f81239dcca62c70324812aee76153d4093c07669ab282b2650c303" Oct 2 20:19:43.248662 env[1055]: time="2023-10-02T20:19:43.248527197Z" level=info msg="StopPodSandbox for \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\"" Oct 2 20:19:43.248946 env[1055]: time="2023-10-02T20:19:43.248797092Z" level=info msg="TearDown network for sandbox \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\" successfully" Oct 2 20:19:43.248946 env[1055]: time="2023-10-02T20:19:43.248915424Z" level=info msg="StopPodSandbox for \"8f64a8d8974a4c9754a8580096305ed50a9973d97a9ff1aef3fdafc7e795debc\" returns successfully" Oct 2 20:19:43.249838 kubelet[1379]: I1002 20:19:43.249773 1379 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f0b0d7dd-4830-4d73-9a1b-b2ce68236876 path="/var/lib/kubelet/pods/f0b0d7dd-4830-4d73-9a1b-b2ce68236876/volumes" Oct 2 20:19:43.251945 kubelet[1379]: I1002 20:19:43.251880 1379 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f9cf0db5-8913-4242-b322-2a39596646d8 path="/var/lib/kubelet/pods/f9cf0db5-8913-4242-b322-2a39596646d8/volumes" Oct 2 20:19:43.254878 kubelet[1379]: E1002 20:19:43.254847 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:43.311342 systemd[1]: var-lib-kubelet-pods-f9cf0db5\x2d8913\x2d4242\x2db322\x2d2a39596646d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtzmzt.mount: Deactivated successfully. Oct 2 20:19:43.311736 systemd[1]: var-lib-kubelet-pods-f0b0d7dd\x2d4830\x2d4d73\x2d9a1b\x2db2ce68236876-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkzq5.mount: Deactivated successfully. 
Oct 2 20:19:43.311894 systemd[1]: var-lib-kubelet-pods-f9cf0db5\x2d8913\x2d4242\x2db322\x2d2a39596646d8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:19:43.312062 systemd[1]: var-lib-kubelet-pods-f9cf0db5\x2d8913\x2d4242\x2db322\x2d2a39596646d8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 20:19:43.312207 systemd[1]: var-lib-kubelet-pods-f9cf0db5\x2d8913\x2d4242\x2db322\x2d2a39596646d8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:19:44.256399 kubelet[1379]: E1002 20:19:44.256353 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:45.257687 kubelet[1379]: E1002 20:19:45.257646 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:46.259059 kubelet[1379]: E1002 20:19:46.258946 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:47.260298 kubelet[1379]: E1002 20:19:47.260250 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:48.174509 kubelet[1379]: E1002 20:19:48.174436 1379 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:19:48.260947 kubelet[1379]: E1002 20:19:48.260815 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:19:49.262130 kubelet[1379]: E1002 20:19:49.262073 1379 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"