Oct 2 19:31:45.135051 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 19:31:45.135101 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:31:45.135116 kernel: BIOS-provided physical RAM map: Oct 2 19:31:45.135127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 2 19:31:45.135139 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 2 19:31:45.135150 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 2 19:31:45.135169 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Oct 2 19:31:45.135180 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Oct 2 19:31:45.135191 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Oct 2 19:31:45.135203 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 2 19:31:45.135217 kernel: NX (Execute Disable) protection: active Oct 2 19:31:45.135229 kernel: SMBIOS 2.7 present. Oct 2 19:31:45.135242 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Oct 2 19:31:45.135254 kernel: Hypervisor detected: KVM Oct 2 19:31:45.135271 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 19:31:45.135284 kernel: kvm-clock: cpu 0, msr 71f8a001, primary cpu clock Oct 2 19:31:45.135297 kernel: kvm-clock: using sched offset of 6167614558 cycles Oct 2 19:31:45.135311 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 19:31:45.135324 kernel: tsc: Detected 2500.006 MHz processor Oct 2 19:31:45.135337 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 19:31:45.135353 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 19:31:45.135367 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Oct 2 19:31:45.135380 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 19:31:45.135393 kernel: Using GB pages for direct mapping Oct 2 19:31:45.135406 kernel: ACPI: Early table checksum verification disabled Oct 2 19:31:45.135418 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Oct 2 19:31:45.135431 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Oct 2 19:31:45.135444 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Oct 2 19:31:45.135457 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Oct 2 19:31:45.135473 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Oct 2 19:31:45.135487 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 2 19:31:45.135501 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Oct 2 19:31:45.135515 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Oct 2 19:31:45.135529 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Oct 2 19:31:45.135544 kernel: ACPI: WAET 
0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Oct 2 19:31:45.135557 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Oct 2 19:31:45.135571 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Oct 2 19:31:45.135586 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Oct 2 19:31:45.135600 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Oct 2 19:31:45.135614 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Oct 2 19:31:45.135634 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Oct 2 19:31:45.135648 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Oct 2 19:31:45.135663 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Oct 2 19:31:45.135678 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Oct 2 19:31:45.135694 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Oct 2 19:31:45.135709 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Oct 2 19:31:45.135722 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Oct 2 19:31:45.135736 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 19:31:45.135751 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 2 19:31:45.135764 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Oct 2 19:31:45.135778 kernel: NUMA: Initialized distance table, cnt=1 Oct 2 19:31:45.135792 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Oct 2 19:31:45.135820 kernel: Zone ranges: Oct 2 19:31:45.135834 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 19:31:45.135848 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Oct 2 19:31:45.135990 kernel: Normal empty Oct 2 19:31:45.136049 kernel: Movable zone start for each node Oct 2 19:31:45.136064 kernel: Early memory node ranges Oct 2 19:31:45.136103 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 2 19:31:45.136116 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Oct 2 19:31:45.136129 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Oct 2 19:31:45.136148 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 19:31:45.136162 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 2 19:31:45.136176 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Oct 2 19:31:45.136190 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 2 19:31:45.136204 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 19:31:45.136218 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Oct 2 19:31:45.136231 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 19:31:45.136246 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 19:31:45.136260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 19:31:45.136278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 19:31:45.136291 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 19:31:45.136305 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 2 19:31:45.136320 kernel: TSC deadline timer available Oct 2 19:31:45.136334 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 19:31:45.136348 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Oct 2 19:31:45.136361 kernel: Booting paravirtualized kernel on KVM Oct 2 19:31:45.136374 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 19:31:45.136388 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 19:31:45.136404 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 19:31:45.136419 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 19:31:45.136432 kernel: pcpu-alloc: [0] 0 1 Oct 2 19:31:45.136446 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Oct 2 19:31:45.136460 kernel: kvm-guest: PV spinlocks enabled Oct 2 19:31:45.136474 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 19:31:45.136488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Oct 2 19:31:45.136501 kernel: Policy zone: DMA32 Oct 2 19:31:45.136517 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:31:45.136535 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:31:45.136549 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:31:45.136563 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 19:31:45.136577 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:31:45.136592 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 121024K reserved, 0K cma-reserved) Oct 2 19:31:45.136606 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 19:31:45.136620 kernel: Kernel/User page tables isolation: enabled Oct 2 19:31:45.136634 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 19:31:45.136650 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 19:31:45.136664 kernel: rcu: Hierarchical RCU implementation. Oct 2 19:31:45.136679 kernel: rcu: RCU event tracing is enabled. Oct 2 19:31:45.136693 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 19:31:45.136708 kernel: Rude variant of Tasks RCU enabled. Oct 2 19:31:45.136722 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:31:45.136736 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 19:31:45.136750 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 19:31:45.136764 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 2 19:31:45.136780 kernel: random: crng init done Oct 2 19:31:45.136794 kernel: Console: colour VGA+ 80x25 Oct 2 19:31:45.136808 kernel: printk: console [ttyS0] enabled Oct 2 19:31:45.136822 kernel: ACPI: Core revision 20210730 Oct 2 19:31:45.136836 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Oct 2 19:31:45.136850 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 19:31:45.136863 kernel: x2apic enabled Oct 2 19:31:45.136929 kernel: Switched APIC routing to physical x2apic. 
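The kernel command line appears twice in the log above: once as passed by the bootloader and once as re-emitted by the kernel, where dracut has prepended its own rootflags=rw mount.usrflags=ro defaults, so those keys show up twice. A minimal parsing sketch, using the cmdline string copied from the log (on a live system the same text is readable from /proc/cmdline); for simple parameters the kernel typically applies the last assignment, which is what the dict below models:

    # Minimal sketch: split a kernel command line into key=value pairs.
    # The cmdline string below is copied from the boot log; nothing else is assumed.
    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
        "flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront "
        "net.ifnames=0 nvme_core.io_timeout=4294967295 "
        "verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1"
    )

    params = {}
    flags = []
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value          # repeated keys: the last assignment is kept
        else:
            flags.append(token)          # bare switches such as "quiet" (none here)

    print(params["root"])                # LABEL=ROOT
    print(params["verity.usr"])          # PARTUUID=7130c94a-...

Since the duplicated rootflags/mount.usrflags values are identical in both copies, the effective configuration is unchanged.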
Oct 2 19:31:45.136946 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Oct 2 19:31:45.136963 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006) Oct 2 19:31:45.136977 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Oct 2 19:31:45.136990 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Oct 2 19:31:45.137004 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 19:31:45.137028 kernel: Spectre V2 : Mitigation: Retpolines Oct 2 19:31:45.137045 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 19:31:45.137059 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 19:31:45.137090 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Oct 2 19:31:45.137105 kernel: RETBleed: Vulnerable Oct 2 19:31:45.137120 kernel: Speculative Store Bypass: Vulnerable Oct 2 19:31:45.137134 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:31:45.137149 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 19:31:45.137163 kernel: GDS: Unknown: Dependent on hypervisor status Oct 2 19:31:45.137178 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 19:31:45.137197 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 19:31:45.137211 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 19:31:45.137226 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Oct 2 19:31:45.137240 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Oct 2 19:31:45.137255 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 2 19:31:45.137272 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 2 19:31:45.137287 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 2 19:31:45.137302 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Oct 2 19:31:45.137316 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 19:31:45.137331 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Oct 2 19:31:45.137346 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Oct 2 19:31:45.137360 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Oct 2 19:31:45.137375 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Oct 2 19:31:45.137389 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Oct 2 19:31:45.137403 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Oct 2 19:31:45.137418 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Oct 2 19:31:45.137433 kernel: Freeing SMP alternatives memory: 32K Oct 2 19:31:45.137450 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:31:45.137464 kernel: LSM: Security Framework initializing Oct 2 19:31:45.137479 kernel: SELinux: Initializing. Oct 2 19:31:45.137493 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:31:45.137508 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 2 19:31:45.137523 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Oct 2 19:31:45.137537 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. 
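The mitigation summary above (retpolines for Spectre v2, "RETBleed: Vulnerable", MDS and MMIO Stale Data "Clear CPU buffers attempted, no microcode") is also exposed at runtime through sysfs. A small sketch that reads the same status, assuming only the standard /sys/devices/system/cpu/vulnerabilities/ directory present on 5.15-era x86-64 kernels:

    # Sketch: read the kernel's per-vulnerability mitigation status from sysfs.
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        status = entry.read_text().strip()
        print(f"{entry.name:25} {status}")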
Oct 2 19:31:45.137552 kernel: signal: max sigframe size: 3632 Oct 2 19:31:45.137567 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:31:45.137582 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 19:31:45.137599 kernel: smp: Bringing up secondary CPUs ... Oct 2 19:31:45.137614 kernel: x86: Booting SMP configuration: Oct 2 19:31:45.137628 kernel: .... node #0, CPUs: #1 Oct 2 19:31:45.137643 kernel: kvm-clock: cpu 1, msr 71f8a041, secondary cpu clock Oct 2 19:31:45.137658 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Oct 2 19:31:45.137673 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Oct 2 19:31:45.137689 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 2 19:31:45.137704 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 19:31:45.137719 kernel: smpboot: Max logical packages: 1 Oct 2 19:31:45.137736 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS) Oct 2 19:31:45.137750 kernel: devtmpfs: initialized Oct 2 19:31:45.137765 kernel: x86/mm: Memory block size: 128MB Oct 2 19:31:45.137780 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:31:45.137795 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 19:31:45.137810 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:31:45.137825 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:31:45.137839 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:31:45.137853 kernel: audit: type=2000 audit(1696275104.297:1): state=initialized audit_enabled=0 res=1 Oct 2 19:31:45.137871 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:31:45.137886 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 19:31:45.137901 kernel: cpuidle: using governor menu Oct 2 19:31:45.137917 kernel: ACPI: bus type PCI registered Oct 2 19:31:45.137932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:31:45.137947 kernel: dca service started, version 1.12.1 Oct 2 19:31:45.138056 kernel: PCI: Using configuration type 1 for base access Oct 2 19:31:45.138094 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
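The BogoMIPS figures are derived rather than measured on this guest: calibration is skipped and the value follows from lpj via the kernel's bogomips = lpj / (500000 / HZ) relation, which only reproduces the printed numbers for HZ=1000. A worked check using the values from the log:

    # Worked check of the BogoMIPS numbers printed during boot.
    # lpj and the per-CPU / total BogoMIPS values are copied from the log;
    # bogomips = lpj / (500000 / HZ) is the kernel's own formula.
    lpj = 2_500_006          # "Calibrating delay loop (skipped) ... (lpj=2500006)"
    hz = 1000                # the only tick rate that reproduces the printed values

    per_cpu = lpj / (500_000 / hz)
    total = 2 * per_cpu      # "smpboot: Total of 2 processors activated"

    print(f"per-CPU BogoMIPS: {per_cpu:.2f}")   # 5000.01
    print(f"total   BogoMIPS: {total:.2f}")     # 10000.02

The per-CPU value also tracks the detected 2500.006 MHz TSC: at HZ=1000 one jiffy is a millisecond, so lpj equals the TSC rate in kHz.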
Oct 2 19:31:45.138108 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:31:45.138126 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:31:45.138141 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:31:45.138157 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:31:45.138171 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:31:45.138185 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:31:45.138201 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:31:45.138216 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:31:45.138231 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:31:45.138246 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 2 19:31:45.138264 kernel: ACPI: Interpreter enabled Oct 2 19:31:45.138279 kernel: ACPI: PM: (supports S0 S5) Oct 2 19:31:45.138294 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 19:31:45.138310 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 19:31:45.138325 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 2 19:31:45.138340 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:31:45.138537 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:31:45.138669 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Oct 2 19:31:45.138692 kernel: acpiphp: Slot [3] registered Oct 2 19:31:45.138798 kernel: acpiphp: Slot [4] registered Oct 2 19:31:45.138854 kernel: acpiphp: Slot [5] registered Oct 2 19:31:45.138870 kernel: acpiphp: Slot [6] registered Oct 2 19:31:45.138885 kernel: acpiphp: Slot [7] registered Oct 2 19:31:45.138900 kernel: acpiphp: Slot [8] registered Oct 2 19:31:45.138915 kernel: acpiphp: Slot [9] registered Oct 2 19:31:45.138930 kernel: acpiphp: Slot [10] registered Oct 2 19:31:45.138946 kernel: acpiphp: Slot [11] registered Oct 2 19:31:45.138964 kernel: acpiphp: Slot [12] registered Oct 2 19:31:45.138979 kernel: acpiphp: Slot [13] registered Oct 2 19:31:45.138993 kernel: acpiphp: Slot [14] registered Oct 2 19:31:45.139008 kernel: acpiphp: Slot [15] registered Oct 2 19:31:45.139023 kernel: acpiphp: Slot [16] registered Oct 2 19:31:45.139038 kernel: acpiphp: Slot [17] registered Oct 2 19:31:45.139052 kernel: acpiphp: Slot [18] registered Oct 2 19:31:45.139067 kernel: acpiphp: Slot [19] registered Oct 2 19:31:45.139098 kernel: acpiphp: Slot [20] registered Oct 2 19:31:45.139114 kernel: acpiphp: Slot [21] registered Oct 2 19:31:45.139128 kernel: acpiphp: Slot [22] registered Oct 2 19:31:45.139142 kernel: acpiphp: Slot [23] registered Oct 2 19:31:45.139156 kernel: acpiphp: Slot [24] registered Oct 2 19:31:45.139169 kernel: acpiphp: Slot [25] registered Oct 2 19:31:45.139183 kernel: acpiphp: Slot [26] registered Oct 2 19:31:45.139197 kernel: acpiphp: Slot [27] registered Oct 2 19:31:45.139211 kernel: acpiphp: Slot [28] registered Oct 2 19:31:45.139226 kernel: acpiphp: Slot [29] registered Oct 2 19:31:45.139240 kernel: acpiphp: Slot [30] registered Oct 2 19:31:45.139257 kernel: acpiphp: Slot [31] registered Oct 2 19:31:45.139271 kernel: PCI host bridge to bus 0000:00 Oct 2 19:31:45.139465 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 19:31:45.139582 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 19:31:45.139695 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 
19:31:45.139814 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Oct 2 19:31:45.139926 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:31:45.140225 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 19:31:45.140370 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Oct 2 19:31:45.140510 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Oct 2 19:31:45.140636 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 2 19:31:45.140763 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Oct 2 19:31:45.140933 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Oct 2 19:31:45.141061 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Oct 2 19:31:45.141447 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Oct 2 19:31:45.141631 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Oct 2 19:31:45.141756 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Oct 2 19:31:45.141907 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Oct 2 19:31:45.142048 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 12695 usecs Oct 2 19:31:45.142205 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Oct 2 19:31:45.142342 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Oct 2 19:31:45.142475 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Oct 2 19:31:45.142600 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 2 19:31:45.142735 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Oct 2 19:31:45.142864 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Oct 2 19:31:45.143000 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Oct 2 19:31:45.143142 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Oct 2 19:31:45.143166 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 19:31:45.143181 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 19:31:45.143196 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 19:31:45.143212 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 19:31:45.143226 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 19:31:45.143241 kernel: iommu: Default domain type: Translated Oct 2 19:31:45.143256 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 19:31:45.143383 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Oct 2 19:31:45.143528 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 2 19:31:45.143661 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Oct 2 19:31:45.143680 kernel: vgaarb: loaded Oct 2 19:31:45.143695 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:31:45.143710 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Oct 2 19:31:45.143723 kernel: PTP clock support registered Oct 2 19:31:45.143737 kernel: PCI: Using ACPI for IRQ routing Oct 2 19:31:45.143752 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 19:31:45.143767 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 2 19:31:45.143784 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Oct 2 19:31:45.143808 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Oct 2 19:31:45.143823 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Oct 2 19:31:45.143839 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 19:31:45.143854 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:31:45.143869 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:31:45.143884 kernel: pnp: PnP ACPI init Oct 2 19:31:45.143899 kernel: pnp: PnP ACPI: found 5 devices Oct 2 19:31:45.143914 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 19:31:45.143932 kernel: NET: Registered PF_INET protocol family Oct 2 19:31:45.143947 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:31:45.143962 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 2 19:31:45.143976 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:31:45.143991 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 19:31:45.144006 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Oct 2 19:31:45.144021 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 2 19:31:45.144036 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:31:45.144051 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 2 19:31:45.144068 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:31:45.144095 kernel: NET: Registered PF_XDP protocol family Oct 2 19:31:45.144221 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 19:31:45.144341 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 19:31:45.144455 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 19:31:45.144571 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Oct 2 19:31:45.144704 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 19:31:45.144837 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Oct 2 19:31:45.144860 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:31:45.144876 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 2 19:31:45.144891 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns Oct 2 19:31:45.144906 kernel: clocksource: Switched to clocksource tsc Oct 2 19:31:45.144921 kernel: Initialise system trusted keyrings Oct 2 19:31:45.144936 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 2 19:31:45.144952 kernel: Key type asymmetric registered Oct 2 19:31:45.144967 kernel: Asymmetric key parser 'x509' registered Oct 2 19:31:45.144984 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:31:45.144999 kernel: io scheduler mq-deadline registered Oct 2 19:31:45.145015 kernel: io scheduler kyber registered Oct 2 19:31:45.145030 kernel: io scheduler bfq registered Oct 2 19:31:45.145045 kernel: ioatdma: 
Intel(R) QuickData Technology Driver 5.00 Oct 2 19:31:45.145061 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:31:45.145087 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 19:31:45.145103 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 19:31:45.145118 kernel: i8042: Warning: Keylock active Oct 2 19:31:45.145136 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 19:31:45.145151 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 19:31:45.145285 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 2 19:31:45.145404 kernel: rtc_cmos 00:00: registered as rtc0 Oct 2 19:31:45.145523 kernel: rtc_cmos 00:00: setting system clock to 2023-10-02T19:31:44 UTC (1696275104) Oct 2 19:31:45.145640 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 2 19:31:45.145658 kernel: intel_pstate: CPU model not supported Oct 2 19:31:45.145673 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:31:45.145691 kernel: Segment Routing with IPv6 Oct 2 19:31:45.145706 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:31:45.145721 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:31:45.145734 kernel: Key type dns_resolver registered Oct 2 19:31:45.145747 kernel: IPI shorthand broadcast: enabled Oct 2 19:31:45.145760 kernel: sched_clock: Marking stable (558188943, 317952372)->(1080523257, -204381942) Oct 2 19:31:45.145773 kernel: registered taskstats version 1 Oct 2 19:31:45.145785 kernel: Loading compiled-in X.509 certificates Oct 2 19:31:45.145802 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 19:31:45.145824 kernel: Key type .fscrypt registered Oct 2 19:31:45.145842 kernel: Key type fscrypt-provisioning registered Oct 2 19:31:45.145858 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:31:45.145872 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:31:45.145886 kernel: ima: No architecture policies found Oct 2 19:31:45.145901 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 19:31:45.145915 kernel: Write protecting the kernel read-only data: 28672k Oct 2 19:31:45.145930 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 19:31:45.145945 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K Oct 2 19:31:45.145962 kernel: Run /init as init process Oct 2 19:31:45.145977 kernel: with arguments: Oct 2 19:31:45.145992 kernel: /init Oct 2 19:31:45.146006 kernel: with environment: Oct 2 19:31:45.146020 kernel: HOME=/ Oct 2 19:31:45.146034 kernel: TERM=linux Oct 2 19:31:45.146048 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:31:45.146066 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:31:45.146132 systemd[1]: Detected virtualization amazon. Oct 2 19:31:45.146147 systemd[1]: Detected architecture x86-64. Oct 2 19:31:45.146162 systemd[1]: Running in initrd. Oct 2 19:31:45.146177 systemd[1]: No hostname configured, using default hostname. Oct 2 19:31:45.146206 systemd[1]: Hostname set to <localhost>. Oct 2 19:31:45.146228 systemd[1]: Initializing machine ID from VM UUID. 
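The epoch in the rtc_cmos line, 1696275104, is the same instant stamped on the audit records (audit(1696275104.297:1)) and the journal timestamps. A quick standard-library conversion confirming it matches the printed 2023-10-02T19:31:44 UTC:

    # Convert the epoch printed by rtc_cmos / audit into a readable UTC time.
    from datetime import datetime, timezone

    epoch = 1696275104   # from "rtc_cmos 00:00: setting system clock to ... (1696275104)"
    print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
    # 2023-10-02T19:31:44+00:00, matching the timestamp in the log line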
Oct 2 19:31:45.146243 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 19:31:45.146259 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:31:45.146274 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:31:45.146289 systemd[1]: Reached target cryptsetup.target. Oct 2 19:31:45.146304 systemd[1]: Reached target paths.target. Oct 2 19:31:45.146320 systemd[1]: Reached target slices.target. Oct 2 19:31:45.146335 systemd[1]: Reached target swap.target. Oct 2 19:31:45.146351 systemd[1]: Reached target timers.target. Oct 2 19:31:45.146370 systemd[1]: Listening on iscsid.socket. Oct 2 19:31:45.146388 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:31:45.146403 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:31:45.146419 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:31:45.146434 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:31:45.146450 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:31:45.146466 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:31:45.146481 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:31:45.146499 systemd[1]: Reached target sockets.target. Oct 2 19:31:45.146515 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:31:45.146531 systemd[1]: Finished network-cleanup.service. Oct 2 19:31:45.146547 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:31:45.146562 systemd[1]: Starting systemd-journald.service... Oct 2 19:31:45.146578 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:31:45.146593 systemd[1]: Starting systemd-resolved.service... Oct 2 19:31:45.146615 systemd-journald[185]: Journal started Oct 2 19:31:45.146689 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2cf1b8948ae1ea374d615c40000b9f) is 4.8M, max 38.7M, 33.9M free. Oct 2 19:31:45.154103 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:31:45.152872 systemd-modules-load[186]: Inserted module 'overlay' Oct 2 19:31:45.318203 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:31:45.318241 kernel: Bridge firewalling registered Oct 2 19:31:45.318259 kernel: SCSI subsystem initialized Oct 2 19:31:45.318275 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:31:45.318297 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:31:45.318316 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:31:45.318332 systemd[1]: Started systemd-journald.service. Oct 2 19:31:45.188734 systemd-modules-load[186]: Inserted module 'br_netfilter' Oct 2 19:31:45.323859 kernel: audit: type=1130 audit(1696275105.317:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:45.239167 systemd-modules-load[186]: Inserted module 'dm_multipath' Oct 2 19:31:45.249985 systemd-resolved[187]: Positive Trust Anchors: Oct 2 19:31:45.332292 kernel: audit: type=1130 audit(1696275105.324:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.249997 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:31:45.340837 kernel: audit: type=1130 audit(1696275105.332:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.250047 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:31:45.354388 kernel: audit: type=1130 audit(1696275105.341:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.254369 systemd-resolved[187]: Defaulting to hostname 'linux'. Oct 2 19:31:45.318587 systemd[1]: Started systemd-resolved.service. Oct 2 19:31:45.363842 kernel: audit: type=1130 audit(1696275105.356:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.332329 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:31:45.333919 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 19:31:45.354563 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:31:45.365111 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:31:45.370532 systemd[1]: Reached target nss-lookup.target. Oct 2 19:31:45.378222 kernel: audit: type=1130 audit(1696275105.369:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:45.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.378279 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:31:45.379471 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:31:45.385053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:31:45.407747 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:31:45.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.415098 kernel: audit: type=1130 audit(1696275105.406:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.418360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:31:45.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.425093 kernel: audit: type=1130 audit(1696275105.417:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.447699 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:31:45.454497 kernel: audit: type=1130 audit(1696275105.447:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.455425 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:31:45.469187 dracut-cmdline[207]: dracut-dracut-053 Oct 2 19:31:45.474958 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 19:31:45.554171 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:31:45.571102 kernel: iscsi: registered transport (tcp) Oct 2 19:31:45.602439 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:31:45.602500 kernel: QLogic iSCSI HBA Driver Oct 2 19:31:45.645936 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:31:45.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:45.649662 systemd[1]: Starting dracut-pre-udev.service... 
Oct 2 19:31:45.710451 kernel: raid6: avx512x4 gen() 12988 MB/s Oct 2 19:31:45.731412 kernel: raid6: avx512x4 xor() 3903 MB/s Oct 2 19:31:45.748128 kernel: raid6: avx512x2 gen() 14169 MB/s Oct 2 19:31:45.765129 kernel: raid6: avx512x2 xor() 19579 MB/s Oct 2 19:31:45.782169 kernel: raid6: avx512x1 gen() 11567 MB/s Oct 2 19:31:45.800143 kernel: raid6: avx512x1 xor() 18234 MB/s Oct 2 19:31:45.818158 kernel: raid6: avx2x4 gen() 14528 MB/s Oct 2 19:31:45.836127 kernel: raid6: avx2x4 xor() 6435 MB/s Oct 2 19:31:45.854181 kernel: raid6: avx2x2 gen() 13873 MB/s Oct 2 19:31:45.873189 kernel: raid6: avx2x2 xor() 9829 MB/s Oct 2 19:31:45.891169 kernel: raid6: avx2x1 gen() 6492 MB/s Oct 2 19:31:45.909131 kernel: raid6: avx2x1 xor() 10957 MB/s Oct 2 19:31:45.927122 kernel: raid6: sse2x4 gen() 7945 MB/s Oct 2 19:31:45.946142 kernel: raid6: sse2x4 xor() 5057 MB/s Oct 2 19:31:45.964133 kernel: raid6: sse2x2 gen() 7605 MB/s Oct 2 19:31:45.982794 kernel: raid6: sse2x2 xor() 4051 MB/s Oct 2 19:31:45.999128 kernel: raid6: sse2x1 gen() 5468 MB/s Oct 2 19:31:46.020244 kernel: raid6: sse2x1 xor() 3514 MB/s Oct 2 19:31:46.020307 kernel: raid6: using algorithm avx2x4 gen() 14528 MB/s Oct 2 19:31:46.022277 kernel: raid6: .... xor() 6435 MB/s, rmw enabled Oct 2 19:31:46.022360 kernel: raid6: using avx512x2 recovery algorithm Oct 2 19:31:46.048138 kernel: xor: automatically using best checksumming function avx Oct 2 19:31:46.178106 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 19:31:46.190776 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:31:46.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:46.191000 audit: BPF prog-id=7 op=LOAD Oct 2 19:31:46.191000 audit: BPF prog-id=8 op=LOAD Oct 2 19:31:46.194132 systemd[1]: Starting systemd-udevd.service... Oct 2 19:31:46.220000 systemd-udevd[385]: Using default interface naming scheme 'v252'. Oct 2 19:31:46.232394 systemd[1]: Started systemd-udevd.service. Oct 2 19:31:46.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:46.236539 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:31:46.272125 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Oct 2 19:31:46.315209 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:31:46.317004 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:31:46.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:46.379619 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:31:46.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:46.467110 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 2 19:31:46.467492 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:31:46.467514 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 2 19:31:46.481246 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
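The raid6 lines are a boot-time benchmark of the available SIMD implementations; the kernel then announces the one with the best measured gen() throughput (avx2x4 here) plus a separate recovery algorithm (avx512x2). A tiny sketch that reproduces the selection from the figures transcribed out of the log:

    # Reproduce the raid6 algorithm choice from the throughput figures in the log
    # (MB/s, copied from the "raid6: ..." benchmark lines above).
    results = {
        "avx512x4": (12988, 3903),
        "avx512x2": (14169, 19579),
        "avx512x1": (11567, 18234),
        "avx2x4":   (14528, 6435),
        "avx2x2":   (13873, 9829),
        "avx2x1":   (6492, 10957),
        "sse2x4":   (7945, 5057),
        "sse2x2":   (7605, 4051),
        "sse2x1":   (5468, 3514),
    }

    best = max(results, key=lambda name: results[name][0])   # highest gen() MB/s
    gen, xor = results[best]
    print(f"using algorithm {best} gen() {gen} MB/s, xor() {xor} MB/s")
    # -> avx2x4, matching "raid6: using algorithm avx2x4 gen() 14528 MB/s"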
Oct 2 19:31:46.485093 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:70:04:b7:9d:c5 Oct 2 19:31:46.486661 (udev-worker)[434]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:31:46.695994 kernel: AVX2 version of gcm_enc/dec engaged. Oct 2 19:31:46.696025 kernel: AES CTR mode by8 optimization enabled Oct 2 19:31:46.696050 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 2 19:31:46.696281 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Oct 2 19:31:46.696304 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 2 19:31:46.696456 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 19:31:46.696476 kernel: GPT:9289727 != 16777215 Oct 2 19:31:46.696496 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 19:31:46.696515 kernel: GPT:9289727 != 16777215 Oct 2 19:31:46.696539 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 19:31:46.696559 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:31:46.696581 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (426) Oct 2 19:31:46.660536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:31:46.712189 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:31:46.717739 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:31:46.731033 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:31:46.734001 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:31:46.737951 systemd[1]: Starting disk-uuid.service... Oct 2 19:31:46.746060 disk-uuid[580]: Primary Header is updated. Oct 2 19:31:46.746060 disk-uuid[580]: Secondary Entries is updated. Oct 2 19:31:46.746060 disk-uuid[580]: Secondary Header is updated. Oct 2 19:31:46.754105 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:31:46.759106 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:31:46.766104 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:31:47.764232 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 2 19:31:47.764306 disk-uuid[581]: The operation has completed successfully. Oct 2 19:31:47.928888 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:31:47.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:47.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:47.929001 systemd[1]: Finished disk-uuid.service. Oct 2 19:31:47.937214 systemd[1]: Starting verity-setup.service... Oct 2 19:31:47.964141 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 19:31:48.048314 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:31:48.050351 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:31:48.053914 systemd[1]: Finished verity-setup.service. Oct 2 19:31:48.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.170115 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:31:48.170026 systemd[1]: Mounted sysusr-usr.mount. 
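The GPT warnings mean the primary header still points at a backup header at LBA 9289727 while the device actually ends at LBA 16777215, the typical signature of an EBS volume that was grown after the image was written. A short sketch of the arithmetic, assuming the 512-byte logical sectors usual for a volume of this kind:

    # Arithmetic behind the GPT warning "GPT:9289727 != 16777215":
    # the primary header still points at a backup header written when the
    # volume was smaller.  512-byte logical sectors are assumed here.
    SECTOR = 512

    alt_header_lba = 9289727     # where the primary header expects the backup
    last_lba = 16777215          # actual last sector of the device

    print(f"image-sized disk: {(alt_header_lba + 1) * SECTOR / 2**30:.2f} GiB")  # ~4.43 GiB
    print(f"actual volume:    {(last_lba + 1) * SECTOR / 2**30:.2f} GiB")        # 8.00 GiB
    # The kernel's suggestion ("Use GNU Parted to correct GPT errors") amounts to
    # moving the backup header to the real end of the disk, e.g. with
    # `sgdisk -e` or parted's repair prompt.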
Oct 2 19:31:48.171287 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:31:48.172179 systemd[1]: Starting ignition-setup.service... Oct 2 19:31:48.178486 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:31:48.195526 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:31:48.195609 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:31:48.195629 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:31:48.209115 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:31:48.231656 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:31:48.248050 systemd[1]: Finished ignition-setup.service. Oct 2 19:31:48.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.252990 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:31:48.347392 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:31:48.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.349000 audit: BPF prog-id=9 op=LOAD Oct 2 19:31:48.351504 systemd[1]: Starting systemd-networkd.service... Oct 2 19:31:48.384784 systemd-networkd[1092]: lo: Link UP Oct 2 19:31:48.384798 systemd-networkd[1092]: lo: Gained carrier Oct 2 19:31:48.389347 systemd-networkd[1092]: Enumeration completed Oct 2 19:31:48.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.389491 systemd[1]: Started systemd-networkd.service. Oct 2 19:31:48.390882 systemd[1]: Reached target network.target. Oct 2 19:31:48.392177 systemd-networkd[1092]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:31:48.393925 systemd[1]: Starting iscsiuio.service... Oct 2 19:31:48.405902 systemd[1]: Started iscsiuio.service. Oct 2 19:31:48.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.407382 systemd[1]: Starting iscsid.service... Oct 2 19:31:48.416113 iscsid[1097]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:31:48.416113 iscsid[1097]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:31:48.416113 iscsid[1097]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:31:48.416113 iscsid[1097]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 19:31:48.416113 iscsid[1097]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:31:48.416113 iscsid[1097]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:31:48.415419 systemd-networkd[1092]: eth0: Link UP Oct 2 19:31:48.415426 systemd-networkd[1092]: eth0: Gained carrier Oct 2 19:31:48.432285 systemd[1]: Started iscsid.service. Oct 2 19:31:48.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.437150 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:31:48.445201 systemd-networkd[1092]: eth0: DHCPv4 address 172.31.22.219/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:31:48.455329 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:31:48.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.455525 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:31:48.458495 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:31:48.460623 systemd[1]: Reached target remote-fs.target. Oct 2 19:31:48.463770 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:31:48.474110 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:31:48.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.946250 ignition[1033]: Ignition 2.14.0 Oct 2 19:31:48.946270 ignition[1033]: Stage: fetch-offline Oct 2 19:31:48.946424 ignition[1033]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:31:48.946466 ignition[1033]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:31:48.964450 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:31:48.966748 ignition[1033]: Ignition finished successfully Oct 2 19:31:48.968986 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:31:48.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:48.977427 systemd[1]: Starting ignition-fetch.service... 
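Ignition logs a SHA-512 for every config it parses; the 6629d8… digest above belongs to base.ign, and the c9f0a6… digest later in the fetch stage to the downloaded user-data. A minimal sketch for reproducing such a digest, assuming (as the log output suggests) that it is computed over the raw file bytes; the path is the one named in the log:

    # Recompute the SHA-512 digest Ignition logs for a config it has parsed,
    # e.g. "parsing config with SHA512: 6629d8e8..." for base.ign.
    import hashlib
    from pathlib import Path

    config = Path("/usr/lib/ignition/base.d/base.ign")   # path taken from the log
    digest = hashlib.sha512(config.read_bytes()).hexdigest()
    print(digest)   # should start with 6629d8e825d60c9c... if the file is unchanged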
Oct 2 19:31:48.994529 ignition[1116]: Ignition 2.14.0 Oct 2 19:31:48.994542 ignition[1116]: Stage: fetch Oct 2 19:31:48.994800 ignition[1116]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:31:48.994835 ignition[1116]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:31:49.012034 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:31:49.013661 ignition[1116]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:31:49.030798 ignition[1116]: INFO : PUT result: OK Oct 2 19:31:49.043724 ignition[1116]: DEBUG : parsed url from cmdline: "" Oct 2 19:31:49.043724 ignition[1116]: INFO : no config URL provided Oct 2 19:31:49.043724 ignition[1116]: INFO : reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:31:49.063480 ignition[1116]: INFO : no config at "/usr/lib/ignition/user.ign" Oct 2 19:31:49.063480 ignition[1116]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:31:49.067840 ignition[1116]: INFO : PUT result: OK Oct 2 19:31:49.067840 ignition[1116]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 2 19:31:49.070229 ignition[1116]: INFO : GET result: OK Oct 2 19:31:49.070229 ignition[1116]: DEBUG : parsing config with SHA512: c9f0a611bb0182c1a049f14de7006a201a850419864a619bbe16264fab8b61ce276a33f8c5572f641f8b12a92b20a6c17d5cc3131c97106e32a6888ab9d590a6 Oct 2 19:31:49.088410 unknown[1116]: fetched base config from "system" Oct 2 19:31:49.088424 unknown[1116]: fetched base config from "system" Oct 2 19:31:49.088433 unknown[1116]: fetched user config from "aws" Oct 2 19:31:49.091929 ignition[1116]: fetch: fetch complete Oct 2 19:31:49.091943 ignition[1116]: fetch: fetch passed Oct 2 19:31:49.092011 ignition[1116]: Ignition finished successfully Oct 2 19:31:49.095121 systemd[1]: Finished ignition-fetch.service. Oct 2 19:31:49.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.102706 systemd[1]: Starting ignition-kargs.service... Oct 2 19:31:49.112024 kernel: kauditd_printk_skb: 19 callbacks suppressed Oct 2 19:31:49.112058 kernel: audit: type=1130 audit(1696275109.100:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.119918 ignition[1122]: Ignition 2.14.0 Oct 2 19:31:49.119931 ignition[1122]: Stage: kargs Oct 2 19:31:49.120152 ignition[1122]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:31:49.120182 ignition[1122]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:31:49.127273 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:31:49.128925 ignition[1122]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:31:49.130820 ignition[1122]: INFO : PUT result: OK Oct 2 19:31:49.133589 ignition[1122]: kargs: kargs passed Oct 2 19:31:49.133652 ignition[1122]: Ignition finished successfully Oct 2 19:31:49.134839 systemd[1]: Finished ignition-kargs.service. 
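The PUT-then-GET sequence in the fetch stage is the IMDSv2 flow: Ignition first obtains a session token from the EC2 instance metadata service, then presents it when downloading user-data. A sketch of the same two requests with the standard library; the URLs are taken from the log and the X-aws-ec2-metadata-token* header names are the documented IMDSv2 ones:

    # The IMDSv2 exchange Ignition performs above: PUT for a session token,
    # then GET user-data with that token.
    import urllib.request

    token_req = urllib.request.Request(
        "http://169.254.169.254/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    with urllib.request.urlopen(token_req, timeout=2) as resp:
        token = resp.read().decode()

    data_req = urllib.request.Request(
        "http://169.254.169.254/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(data_req, timeout=2) as resp:
        print(resp.read().decode())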
Oct 2 19:31:49.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.138700 systemd[1]: Starting ignition-disks.service... Oct 2 19:31:49.144712 kernel: audit: type=1130 audit(1696275109.136:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.150129 ignition[1128]: Ignition 2.14.0 Oct 2 19:31:49.150142 ignition[1128]: Stage: disks Oct 2 19:31:49.150355 ignition[1128]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:31:49.150389 ignition[1128]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:31:49.160369 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:31:49.161803 ignition[1128]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:31:49.163758 ignition[1128]: INFO : PUT result: OK Oct 2 19:31:49.166863 ignition[1128]: disks: disks passed Oct 2 19:31:49.166932 ignition[1128]: Ignition finished successfully Oct 2 19:31:49.169044 systemd[1]: Finished ignition-disks.service. Oct 2 19:31:49.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.170888 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:31:49.175094 kernel: audit: type=1130 audit(1696275109.169:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.176987 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:31:49.177105 systemd[1]: Reached target local-fs.target. Oct 2 19:31:49.180408 systemd[1]: Reached target sysinit.target. Oct 2 19:31:49.181315 systemd[1]: Reached target basic.target. Oct 2 19:31:49.182247 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:31:49.221147 systemd-fsck[1136]: ROOT: clean, 603/553520 files, 56012/553472 blocks Oct 2 19:31:49.228478 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:31:49.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.231936 systemd[1]: Mounting sysroot.mount... Oct 2 19:31:49.238472 kernel: audit: type=1130 audit(1696275109.229:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.250109 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:31:49.257224 systemd[1]: Mounted sysroot.mount. Oct 2 19:31:49.276389 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:31:49.288974 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:31:49.291422 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:31:49.291498 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Oct 2 19:31:49.291537 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:31:49.304390 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:31:49.316452 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:31:49.320981 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:31:49.336106 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1153) Oct 2 19:31:49.339006 initrd-setup-root[1158]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:31:49.344715 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:31:49.344740 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:31:49.344752 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:31:49.357441 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:31:49.364453 initrd-setup-root[1184]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:31:49.364541 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:31:49.374568 initrd-setup-root[1192]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:31:49.384597 initrd-setup-root[1200]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:31:49.565174 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:31:49.578126 kernel: audit: type=1130 audit(1696275109.564:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.567978 systemd[1]: Starting ignition-mount.service... Oct 2 19:31:49.581633 systemd[1]: Starting sysroot-boot.service... Oct 2 19:31:49.588918 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:31:49.589045 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:31:49.619212 ignition[1219]: INFO : Ignition 2.14.0 Oct 2 19:31:49.619212 ignition[1219]: INFO : Stage: mount Oct 2 19:31:49.622298 ignition[1219]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:31:49.622298 ignition[1219]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:31:49.637008 systemd[1]: Finished sysroot-boot.service. Oct 2 19:31:49.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.647104 kernel: audit: type=1130 audit(1696275109.639:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.648605 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:31:49.653588 ignition[1219]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:31:49.655961 ignition[1219]: INFO : PUT result: OK Oct 2 19:31:49.659847 ignition[1219]: INFO : mount: mount passed Oct 2 19:31:49.661034 ignition[1219]: INFO : Ignition finished successfully Oct 2 19:31:49.662868 systemd[1]: Finished ignition-mount.service. 
Oct 2 19:31:49.670391 kernel: audit: type=1130 audit(1696275109.662:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:49.665183 systemd[1]: Starting ignition-files.service... Oct 2 19:31:49.675807 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:31:49.689103 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1228) Oct 2 19:31:49.692429 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:31:49.692603 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 2 19:31:49.692622 kernel: BTRFS info (device nvme0n1p6): has skinny extents Oct 2 19:31:49.700101 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 2 19:31:49.702201 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:31:49.715235 ignition[1247]: INFO : Ignition 2.14.0 Oct 2 19:31:49.715235 ignition[1247]: INFO : Stage: files Oct 2 19:31:49.718162 ignition[1247]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:31:49.718162 ignition[1247]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:31:49.730424 ignition[1247]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:31:49.732136 ignition[1247]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:31:49.733831 ignition[1247]: INFO : PUT result: OK Oct 2 19:31:49.738271 ignition[1247]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:31:49.742822 ignition[1247]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:31:49.742822 ignition[1247]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:31:49.761090 ignition[1247]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:31:49.765193 ignition[1247]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:31:49.767664 unknown[1247]: wrote ssh authorized keys file for user: core Oct 2 19:31:49.769107 ignition[1247]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:31:49.771420 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:31:49.773795 ignition[1247]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Oct 2 19:31:49.928913 ignition[1247]: INFO : GET result: OK Oct 2 19:31:50.244034 ignition[1247]: DEBUG : file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Oct 2 19:31:50.247688 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Oct 2 19:31:50.247688 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 
19:31:50.247688 ignition[1247]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Oct 2 19:31:50.300251 systemd-networkd[1092]: eth0: Gained IPv6LL Oct 2 19:31:50.342164 ignition[1247]: INFO : GET result: OK Oct 2 19:31:50.501415 ignition[1247]: DEBUG : file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Oct 2 19:31:50.504915 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Oct 2 19:31:50.504915 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:31:50.504915 ignition[1247]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:31:50.521693 ignition[1247]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1168043853" Oct 2 19:31:50.527586 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1247) Oct 2 19:31:50.527621 ignition[1247]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1168043853": device or resource busy Oct 2 19:31:50.527621 ignition[1247]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1168043853", trying btrfs: device or resource busy Oct 2 19:31:50.527621 ignition[1247]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1168043853" Oct 2 19:31:50.543070 ignition[1247]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1168043853" Oct 2 19:31:50.555239 ignition[1247]: INFO : op(3): [started] unmounting "/mnt/oem1168043853" Oct 2 19:31:50.556862 systemd[1]: mnt-oem1168043853.mount: Deactivated successfully. 
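Each remote artifact written by the files stage (cni-plugins above, crictl here, kubeadm and kubelet further down) is checked against an expected SHA-512 before being accepted ("file matches expected sum of: ..."). A minimal sketch of that verify-after-download step, standard library only and not Ignition's actual code; the URL and digest in the commented example are copied from the log:

    import hashlib
    import urllib.request

    def fetch_and_verify(url, expected_sha512, chunk=1 << 20):
        """Download url and return its bytes only if the SHA-512 digest matches."""
        h = hashlib.sha512()
        data = bytearray()
        with urllib.request.urlopen(url, timeout=30) as resp:
            while True:
                block = resp.read(chunk)
                if not block:
                    break
                h.update(block)
                data.extend(block)
        digest = h.hexdigest()
        if digest != expected_sha512:
            raise ValueError(f"checksum mismatch: got {digest}, expected {expected_sha512}")
        return bytes(data)

    # Example, values taken from the log above:
    # blob = fetch_and_verify(
    #     "https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz",
    #     "5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540",
    # )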
Oct 2 19:31:50.559157 ignition[1247]: INFO : op(3): [finished] unmounting "/mnt/oem1168043853" Oct 2 19:31:50.560911 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Oct 2 19:31:50.560911 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:31:50.560911 ignition[1247]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:31:50.650464 ignition[1247]: INFO : GET result: OK Oct 2 19:31:52.219541 ignition[1247]: DEBUG : file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Oct 2 19:31:52.223533 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:31:52.223533 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:31:52.223533 ignition[1247]: INFO : GET https://storage.googleapis.com/kubernetes-release/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:31:52.281923 ignition[1247]: INFO : GET result: OK Oct 2 19:31:54.612722 ignition[1247]: DEBUG : file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Oct 2 19:31:54.617180 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:31:54.617180 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:31:54.617180 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:31:54.617180 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:31:54.617180 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:31:54.617180 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:31:54.617180 ignition[1247]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:31:54.637121 ignition[1247]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1338963075" Oct 2 19:31:54.637121 ignition[1247]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1338963075": device or resource busy Oct 2 19:31:54.637121 ignition[1247]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1338963075", trying btrfs: device or resource busy Oct 2 19:31:54.637121 ignition[1247]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1338963075" Oct 2 19:31:54.637121 ignition[1247]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1338963075" Oct 2 19:31:54.637121 ignition[1247]: INFO : op(6): [started] unmounting "/mnt/oem1338963075" Oct 2 19:31:54.637121 ignition[1247]: INFO : op(6): [finished] unmounting "/mnt/oem1338963075" Oct 2 19:31:54.637121 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file 
"/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Oct 2 19:31:54.637121 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:31:54.637121 ignition[1247]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:31:54.634473 systemd[1]: mnt-oem1338963075.mount: Deactivated successfully. Oct 2 19:31:54.664833 ignition[1247]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3763972151" Oct 2 19:31:54.666837 ignition[1247]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3763972151": device or resource busy Oct 2 19:31:54.666837 ignition[1247]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3763972151", trying btrfs: device or resource busy Oct 2 19:31:54.666837 ignition[1247]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3763972151" Oct 2 19:31:54.681687 ignition[1247]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3763972151" Oct 2 19:31:54.681687 ignition[1247]: INFO : op(9): [started] unmounting "/mnt/oem3763972151" Oct 2 19:31:54.681687 ignition[1247]: INFO : op(9): [finished] unmounting "/mnt/oem3763972151" Oct 2 19:31:54.681687 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Oct 2 19:31:54.681687 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:31:54.681687 ignition[1247]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:31:54.699879 ignition[1247]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4202311578" Oct 2 19:31:54.699879 ignition[1247]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4202311578": device or resource busy Oct 2 19:31:54.699879 ignition[1247]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4202311578", trying btrfs: device or resource busy Oct 2 19:31:54.699879 ignition[1247]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4202311578" Oct 2 19:31:54.712454 ignition[1247]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4202311578" Oct 2 19:31:54.712454 ignition[1247]: INFO : op(c): [started] unmounting "/mnt/oem4202311578" Oct 2 19:31:54.712454 ignition[1247]: INFO : op(c): [finished] unmounting "/mnt/oem4202311578" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(e): [started] processing unit "amazon-ssm-agent.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(e): op(f): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(e): op(f): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(e): [finished] processing unit "amazon-ssm-agent.service" Oct 2 
19:31:54.723130 ignition[1247]: INFO : files: op(10): [started] processing unit "nvidia.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(10): [finished] processing unit "nvidia.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(11): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:31:54.723130 ignition[1247]: INFO : files: op(11): op(12): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:31:54.720652 systemd[1]: mnt-oem4202311578.mount: Deactivated successfully. Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(11): op(12): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(11): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(13): [started] processing unit "prepare-critools.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(13): op(14): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(13): op(14): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(13): [finished] processing unit "prepare-critools.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(15): [started] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(15): [finished] setting preset to enabled for "amazon-ssm-agent.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(17): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:31:54.787617 ignition[1247]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:31:54.837155 ignition[1247]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:31:54.837155 ignition[1247]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:31:54.837155 ignition[1247]: INFO : files: files passed Oct 2 19:31:54.837155 ignition[1247]: INFO : Ignition finished successfully Oct 2 19:31:54.850218 systemd[1]: Finished ignition-files.service. Oct 2 19:31:54.862239 kernel: audit: type=1130 audit(1696275114.849:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:54.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.856935 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:31:54.863496 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:31:54.869663 systemd[1]: Starting ignition-quench.service... Oct 2 19:31:54.873116 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:31:54.873210 systemd[1]: Finished ignition-quench.service. Oct 2 19:31:54.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.885051 initrd-setup-root-after-ignition[1272]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:31:54.904347 kernel: audit: type=1130 audit(1696275114.882:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.904398 kernel: audit: type=1131 audit(1696275114.883:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.904423 kernel: audit: type=1130 audit(1696275114.893:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.886211 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:31:54.894460 systemd[1]: Reached target ignition-complete.target. Oct 2 19:31:54.905485 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:31:54.928059 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:31:54.928193 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:31:54.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.931809 systemd[1]: Reached target initrd-fs.target. Oct 2 19:31:54.943647 kernel: audit: type=1130 audit(1696275114.929:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.943692 kernel: audit: type=1131 audit(1696275114.930:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:54.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.945403 systemd[1]: Reached target initrd.target. Oct 2 19:31:54.947363 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:31:54.950477 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:31:54.969192 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:31:54.971716 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:31:54.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.981105 kernel: audit: type=1130 audit(1696275114.969:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:54.990841 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:31:54.991149 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:31:54.996229 systemd[1]: Stopped target timers.target. Oct 2 19:31:54.998895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:31:55.000252 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:31:55.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.002633 systemd[1]: Stopped target initrd.target. Oct 2 19:31:55.012116 kernel: audit: type=1131 audit(1696275115.001:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.012220 systemd[1]: Stopped target basic.target. Oct 2 19:31:55.014246 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:31:55.016792 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:31:55.019107 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:31:55.019493 systemd[1]: Stopped target remote-fs.target. Oct 2 19:31:55.024063 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:31:55.026485 systemd[1]: Stopped target sysinit.target. Oct 2 19:31:55.028628 systemd[1]: Stopped target local-fs.target. Oct 2 19:31:55.030770 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:31:55.032793 systemd[1]: Stopped target swap.target. Oct 2 19:31:55.035196 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:31:55.036996 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:31:55.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.039719 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:31:55.045883 kernel: audit: type=1131 audit(1696275115.038:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.045959 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:31:55.047153 systemd[1]: Stopped dracut-initqueue.service. 
Oct 2 19:31:55.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.049659 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:31:55.049895 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:31:55.057852 kernel: audit: type=1131 audit(1696275115.048:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.058027 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:31:55.059234 systemd[1]: Stopped ignition-files.service. Oct 2 19:31:55.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.067133 iscsid[1097]: iscsid shutting down. Oct 2 19:31:55.062263 systemd[1]: Stopping ignition-mount.service... Oct 2 19:31:55.067220 systemd[1]: Stopping iscsid.service... Oct 2 19:31:55.073018 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:31:55.075144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:31:55.076580 ignition[1285]: INFO : Ignition 2.14.0 Oct 2 19:31:55.076580 ignition[1285]: INFO : Stage: umount Oct 2 19:31:55.076580 ignition[1285]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:31:55.076580 ignition[1285]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Oct 2 19:31:55.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.076720 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:31:55.084774 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:31:55.087825 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:31:55.089893 ignition[1285]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 2 19:31:55.089893 ignition[1285]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 2 19:31:55.093173 ignition[1285]: INFO : PUT result: OK Oct 2 19:31:55.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.098523 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:31:55.100142 systemd[1]: Stopped iscsid.service. Oct 2 19:31:55.102155 ignition[1285]: INFO : umount: umount passed Oct 2 19:31:55.103284 ignition[1285]: INFO : Ignition finished successfully Oct 2 19:31:55.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.106507 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 2 19:31:55.106639 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:31:55.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.112433 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:31:55.112607 systemd[1]: Stopped ignition-mount.service. Oct 2 19:31:55.115647 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:31:55.115894 systemd[1]: Stopped ignition-disks.service. Oct 2 19:31:55.119797 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:31:55.119871 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:31:55.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.130856 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:31:55.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.130931 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:31:55.133671 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:31:55.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.133738 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:31:55.136579 systemd[1]: Stopped target paths.target. Oct 2 19:31:55.136665 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:31:55.137493 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:31:55.141311 systemd[1]: Stopped target slices.target. Oct 2 19:31:55.142205 systemd[1]: Stopped target sockets.target. Oct 2 19:31:55.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.149314 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:31:55.149377 systemd[1]: Closed iscsid.socket. Oct 2 19:31:55.152766 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:31:55.152842 systemd[1]: Stopped ignition-setup.service. Oct 2 19:31:55.161296 systemd[1]: Stopping iscsiuio.service... Oct 2 19:31:55.172706 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:31:55.174176 systemd[1]: Stopped iscsiuio.service. 
Oct 2 19:31:55.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.176440 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:31:55.177569 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:31:55.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.179988 systemd[1]: Stopped target network.target. Oct 2 19:31:55.182147 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:31:55.182194 systemd[1]: Closed iscsiuio.socket. Oct 2 19:31:55.183635 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:31:55.183686 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:31:55.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.191209 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:31:55.192600 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:31:55.197432 systemd-networkd[1092]: eth0: DHCPv6 lease lost Oct 2 19:31:55.200176 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:31:55.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.200276 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:31:55.206000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:31:55.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.204991 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:31:55.205109 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:31:55.209000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:31:55.211502 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:31:55.211564 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:31:55.217842 systemd[1]: Stopping network-cleanup.service... Oct 2 19:31:55.223476 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:31:55.223557 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:31:55.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.225928 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:31:55.225982 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:31:55.231483 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:31:55.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.231543 systemd[1]: Stopped systemd-modules-load.service. 
Oct 2 19:31:55.233037 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:31:55.239428 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:31:55.239795 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:31:55.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.244742 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:31:55.246174 systemd[1]: Stopped network-cleanup.service. Oct 2 19:31:55.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.248270 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:31:55.248319 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:31:55.251841 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:31:55.253278 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:31:55.254408 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:31:55.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.254454 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:31:55.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.256411 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:31:55.256456 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:31:55.257662 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:31:55.257705 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:31:55.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.266162 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:31:55.267792 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:31:55.267860 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:31:55.269383 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:31:55.269428 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:31:55.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:55.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:55.270586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:31:55.270624 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:31:55.278250 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:31:55.278350 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:31:55.280760 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:31:55.287164 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:31:55.310277 systemd[1]: Switching root. Oct 2 19:31:55.339566 systemd-journald[185]: Journal stopped Oct 2 19:32:01.315465 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Oct 2 19:32:01.315551 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:32:01.315572 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:32:01.315596 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:32:01.315618 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:32:01.315636 kernel: SELinux: policy capability open_perms=1 Oct 2 19:32:01.315653 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:32:01.315671 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:32:01.315689 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:32:01.315724 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:32:01.315742 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:32:01.315760 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:32:01.315777 systemd[1]: Successfully loaded SELinux policy in 98.879ms. Oct 2 19:32:01.315809 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.006ms. Oct 2 19:32:01.315830 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:32:01.315853 systemd[1]: Detected virtualization amazon. Oct 2 19:32:01.315871 systemd[1]: Detected architecture x86-64. Oct 2 19:32:01.315889 systemd[1]: Detected first boot. Oct 2 19:32:01.315911 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:32:01.315930 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:32:01.315955 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:32:01.315985 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:32:01.316005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
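After switch-root, systemd reports "Detected first boot" and "Initializing machine ID from VM UUID". On a KVM guest like this one the VM UUID is surfaced through DMI; a rough sketch of deriving a machine-id-style value from it, assuming /sys/class/dmi/id/product_uuid is the source (systemd's own logic has additional sources and fallbacks, so treat this as illustrative only):

    import pathlib
    import uuid

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        """Read the hypervisor-provided UUID (usually root-only) and normalise it
        to 32 lower-case hex characters, the shape /etc/machine-id uses."""
        raw = pathlib.Path(path).read_text().strip()
        return uuid.UUID(raw).hex

    if __name__ == "__main__":
        print(machine_id_from_vm_uuid())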
Oct 2 19:32:01.316023 kernel: kauditd_printk_skb: 40 callbacks suppressed Oct 2 19:32:01.316062 kernel: audit: type=1334 audit(1696275120.513:87): prog-id=12 op=LOAD Oct 2 19:32:01.316705 kernel: audit: type=1334 audit(1696275120.513:88): prog-id=3 op=UNLOAD Oct 2 19:32:01.316741 kernel: audit: type=1334 audit(1696275120.516:89): prog-id=13 op=LOAD Oct 2 19:32:01.316760 kernel: audit: type=1334 audit(1696275120.518:90): prog-id=14 op=LOAD Oct 2 19:32:01.316777 kernel: audit: type=1334 audit(1696275120.518:91): prog-id=4 op=UNLOAD Oct 2 19:32:01.316796 kernel: audit: type=1334 audit(1696275120.518:92): prog-id=5 op=UNLOAD Oct 2 19:32:01.316813 kernel: audit: type=1334 audit(1696275120.520:93): prog-id=15 op=LOAD Oct 2 19:32:01.316832 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:32:01.316854 kernel: audit: type=1334 audit(1696275120.520:94): prog-id=12 op=UNLOAD Oct 2 19:32:01.316876 kernel: audit: type=1334 audit(1696275120.524:95): prog-id=16 op=LOAD Oct 2 19:32:01.316894 kernel: audit: type=1334 audit(1696275120.526:96): prog-id=17 op=LOAD Oct 2 19:32:01.316911 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:32:01.316931 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:32:01.316950 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:32:01.316971 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:32:01.316989 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:32:01.317008 systemd[1]: Created slice system-getty.slice. Oct 2 19:32:01.317030 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:32:01.317050 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:32:01.317069 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:32:01.317109 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:32:01.317129 systemd[1]: Created slice user.slice. Oct 2 19:32:01.317148 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:32:01.317168 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:32:01.317188 systemd[1]: Set up automount boot.automount. Oct 2 19:32:01.317205 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:32:01.317228 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:32:01.317248 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:32:01.317266 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:32:01.317285 systemd[1]: Reached target integritysetup.target. Oct 2 19:32:01.317306 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:32:01.317334 systemd[1]: Reached target remote-fs.target. Oct 2 19:32:01.317354 systemd[1]: Reached target slices.target. Oct 2 19:32:01.317375 systemd[1]: Reached target swap.target. Oct 2 19:32:01.317397 systemd[1]: Reached target torcx.target. Oct 2 19:32:01.317420 systemd[1]: Reached target veritysetup.target. Oct 2 19:32:01.317440 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:32:01.317460 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:32:01.317482 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:32:01.317501 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:32:01.317522 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:32:01.317541 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:32:01.317562 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:32:01.317596 systemd[1]: Mounting dev-mqueue.mount... 
Oct 2 19:32:01.317622 systemd[1]: Mounting media.mount... Oct 2 19:32:01.317644 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:32:01.317671 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:32:01.317691 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:32:01.317711 systemd[1]: Mounting tmp.mount... Oct 2 19:32:01.317736 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:32:01.317758 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:32:01.317783 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:32:01.317801 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:32:01.317821 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:32:01.317840 systemd[1]: Starting modprobe@drm.service... Oct 2 19:32:01.317862 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:32:01.317882 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:32:01.317902 systemd[1]: Starting modprobe@loop.service... Oct 2 19:32:01.317926 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:32:01.317946 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:32:01.317966 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:32:01.317986 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:32:01.318006 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:32:01.318026 systemd[1]: Stopped systemd-journald.service. Oct 2 19:32:01.318046 systemd[1]: Starting systemd-journald.service... Oct 2 19:32:01.318064 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:32:01.318178 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:32:01.318205 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:32:01.318228 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:32:01.318250 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:32:01.318276 systemd[1]: Stopped verity-setup.service. Oct 2 19:32:01.318298 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:32:01.318322 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:32:01.318344 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:32:01.318365 systemd[1]: Mounted media.mount. Oct 2 19:32:01.318389 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:32:01.318415 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:32:01.318438 systemd[1]: Mounted tmp.mount. Oct 2 19:32:01.318461 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:32:01.318484 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:32:01.318506 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:32:01.318530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:32:01.318551 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:32:01.318575 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:32:01.318599 systemd[1]: Finished modprobe@drm.service. Oct 2 19:32:01.318625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:32:01.318649 systemd[1]: Finished modprobe@efi_pstore.service. 
Oct 2 19:32:01.318671 kernel: fuse: init (API version 7.34) Oct 2 19:32:01.318704 systemd-journald[1393]: Journal started Oct 2 19:32:01.318798 systemd-journald[1393]: Runtime Journal (/run/log/journal/ec2cf1b8948ae1ea374d615c40000b9f) is 4.8M, max 38.7M, 33.9M free. Oct 2 19:31:55.951000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:31:56.092000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:31:56.092000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:31:56.092000 audit: BPF prog-id=10 op=LOAD Oct 2 19:31:56.092000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:31:56.092000 audit: BPF prog-id=11 op=LOAD Oct 2 19:31:56.092000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:32:00.513000 audit: BPF prog-id=12 op=LOAD Oct 2 19:32:00.513000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:32:00.516000 audit: BPF prog-id=13 op=LOAD Oct 2 19:32:00.518000 audit: BPF prog-id=14 op=LOAD Oct 2 19:32:00.518000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:32:00.518000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:32:00.520000 audit: BPF prog-id=15 op=LOAD Oct 2 19:32:00.520000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:32:00.524000 audit: BPF prog-id=16 op=LOAD Oct 2 19:32:00.526000 audit: BPF prog-id=17 op=LOAD Oct 2 19:32:00.526000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:32:00.526000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:32:00.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:00.541000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:32:00.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:00.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:00.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:00.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:00.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:00.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:00.999000 audit: BPF prog-id=18 op=LOAD Oct 2 19:32:00.999000 audit: BPF prog-id=19 op=LOAD Oct 2 19:32:00.999000 audit: BPF prog-id=20 op=LOAD Oct 2 19:32:00.999000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:32:01.000000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:32:01.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.304000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:32:01.304000 audit[1393]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffef8051800 a2=4000 a3=7ffef805189c items=0 ppid=1 pid=1393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:01.304000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:32:01.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:00.509009 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:32:01.329347 systemd[1]: Finished systemd-modules-load.service. 
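The audit records above all follow the same key=value shape (SERVICE_START/SERVICE_STOP, pid, unit, res). When grepping a long boot transcript like this one it helps to pull just the unit name and result out of each record; a small sketch, with the regex tailored to the exact format shown here and nothing more:

    import re

    AUDIT_RE = re.compile(
        r"audit\[\d+\]: (?P<type>SERVICE_START|SERVICE_STOP) .*?"
        r"unit=(?P<unit>\S+) .*?res=(?P<res>\w+)"
    )

    def service_events(lines):
        """Yield (event_type, unit, result) for every SERVICE_START/STOP audit
        record, including lines that carry several records run together."""
        for line in lines:
            for m in AUDIT_RE.finditer(line):
                yield m.group("type"), m.group("unit"), m.group("res")

    # Example:
    # for ev, unit, res in service_events(open("boot.log")):
    #     print(ev, unit, res)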
Oct 2 19:31:56.281445 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:32:00.528898 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:31:56.282101 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:31:56.282131 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:31:56.282176 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:31:56.282193 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:32:01.334595 systemd[1]: Started systemd-journald.service. Oct 2 19:32:01.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:56.282237 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:32:01.334368 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:31:56.282258 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:31:56.282505 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:32:01.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:56.282558 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:31:56.282578 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:31:56.283531 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:31:56.283630 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:32:01.337516 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:31:56.283695 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:31:56.283720 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:31:56.283803 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:32:01.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:56.283826 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:56Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:32:01.339387 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Oct 2 19:31:59.874576 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:59.874842 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:59.874950 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:59.875177 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:31:59.875229 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:31:59.875288 /usr/lib/systemd/system-generators/torcx-generator[1318]: time="2023-10-02T19:31:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:32:01.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.340834 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:32:01.343138 systemd[1]: Reached target network-pre.target. Oct 2 19:32:01.350100 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:32:01.360670 kernel: loop: module loaded Oct 2 19:32:01.353504 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:32:01.356244 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:32:01.360034 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:32:01.362971 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:32:01.364326 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:32:01.366182 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:32:01.374625 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:32:01.380699 systemd-journald[1393]: Time spent on flushing to /var/log/journal/ec2cf1b8948ae1ea374d615c40000b9f is 81.947ms for 1192 entries. 
Oct 2 19:32:01.380699 systemd-journald[1393]: System Journal (/var/log/journal/ec2cf1b8948ae1ea374d615c40000b9f) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:32:01.496429 systemd-journald[1393]: Received client request to flush runtime journal. Oct 2 19:32:01.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.386347 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:32:01.386519 systemd[1]: Finished modprobe@loop.service. Oct 2 19:32:01.388540 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:32:01.390036 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:32:01.391622 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:32:01.394889 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:32:01.396933 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:32:01.413837 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:32:01.473296 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:32:01.476293 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:32:01.500070 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:32:01.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.505531 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:32:01.515022 udevadm[1432]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:32:01.510287 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:32:01.637899 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:32:01.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:01.641229 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:32:01.740513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Oct 2 19:32:01.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:02.241198 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:32:02.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:02.242000 audit: BPF prog-id=21 op=LOAD Oct 2 19:32:02.242000 audit: BPF prog-id=22 op=LOAD Oct 2 19:32:02.242000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:32:02.242000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:32:02.244253 systemd[1]: Starting systemd-udevd.service... Oct 2 19:32:02.277061 systemd-udevd[1438]: Using default interface naming scheme 'v252'. Oct 2 19:32:02.353131 systemd[1]: Started systemd-udevd.service. Oct 2 19:32:02.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:02.355000 audit: BPF prog-id=23 op=LOAD Oct 2 19:32:02.357342 systemd[1]: Starting systemd-networkd.service... Oct 2 19:32:02.370000 audit: BPF prog-id=24 op=LOAD Oct 2 19:32:02.370000 audit: BPF prog-id=25 op=LOAD Oct 2 19:32:02.370000 audit: BPF prog-id=26 op=LOAD Oct 2 19:32:02.373526 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:32:02.459780 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:32:02.474823 systemd[1]: Started systemd-userdbd.service. Oct 2 19:32:02.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:02.553548 (udev-worker)[1452]: Network interface NamePolicy= disabled on kernel command line. Oct 2 19:32:02.623103 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:32:02.631769 systemd-networkd[1445]: lo: Link UP Oct 2 19:32:02.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:02.631782 systemd-networkd[1445]: lo: Gained carrier Oct 2 19:32:02.632439 systemd-networkd[1445]: Enumeration completed Oct 2 19:32:02.632566 systemd[1]: Started systemd-networkd.service. Oct 2 19:32:02.635426 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:32:02.637894 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 2 19:32:02.642763 systemd-networkd[1445]: eth0: Link UP Oct 2 19:32:02.643123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 2 19:32:02.643278 systemd-networkd[1445]: eth0: Gained carrier Oct 2 19:32:02.632000 audit[1448]: AVC avc: denied { confidentiality } for pid=1448 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:32:02.650090 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:32:02.652122 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Oct 2 19:32:02.652365 systemd-networkd[1445]: eth0: DHCPv4 address 172.31.22.219/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 2 19:32:02.657102 kernel: ACPI: button: Sleep Button [SLPF] Oct 2 19:32:02.632000 audit[1448]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55aaf8e05b80 a1=32194 a2=7f28f13a4bc5 a3=5 items=106 ppid=1438 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:02.632000 audit: CWD cwd="/" Oct 2 19:32:02.632000 audit: PATH item=0 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=1 name=(null) inode=14265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=2 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=3 name=(null) inode=14266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=4 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=5 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=6 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=7 name=(null) inode=14268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=8 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=9 name=(null) inode=14269 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=10 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=11 name=(null) inode=14270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=12 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=13 name=(null) inode=14271 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=14 name=(null) inode=14267 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=15 name=(null) inode=14272 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=16 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=17 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=18 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=19 name=(null) inode=14274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=20 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=21 name=(null) inode=14275 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=22 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=23 name=(null) inode=14276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=24 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=25 name=(null) inode=14277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=26 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=27 name=(null) inode=14278 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=28 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=29 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=30 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=31 name=(null) inode=14280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=32 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=33 name=(null) inode=14281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=34 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=35 name=(null) inode=14282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=36 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=37 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=38 name=(null) inode=14279 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=39 name=(null) inode=14284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=40 name=(null) inode=14264 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=41 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=42 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=43 name=(null) inode=14286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=44 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=45 name=(null) inode=14287 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=46 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=47 name=(null) inode=14288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=48 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=49 name=(null) inode=14289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=50 name=(null) inode=14285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=51 name=(null) inode=14290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=52 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=53 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=54 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=55 name=(null) inode=14292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=56 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=57 name=(null) inode=14293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=58 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=59 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 
audit: PATH item=60 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=61 name=(null) inode=14295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=62 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=63 name=(null) inode=14296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=64 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=65 name=(null) inode=14297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=66 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=67 name=(null) inode=14298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=68 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=69 name=(null) inode=14299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=70 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=71 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=72 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=73 name=(null) inode=14301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=74 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=75 name=(null) inode=14302 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=76 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=77 name=(null) inode=14303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=78 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=79 name=(null) inode=14304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=80 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=81 name=(null) inode=14305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=82 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=83 name=(null) inode=14306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=84 name=(null) inode=14306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=85 name=(null) inode=14307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=86 name=(null) inode=14306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=87 name=(null) inode=14308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=88 name=(null) inode=14306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=89 name=(null) inode=14309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=90 name=(null) inode=14306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=91 name=(null) inode=14310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=92 name=(null) inode=14306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=93 name=(null) inode=14311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=94 name=(null) inode=14291 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=95 name=(null) inode=14312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=96 name=(null) inode=14312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=97 name=(null) inode=14313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=98 name=(null) inode=14312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=99 name=(null) inode=14314 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=100 name=(null) inode=14312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=101 name=(null) inode=14315 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=102 name=(null) inode=14312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=103 name=(null) inode=14316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=104 name=(null) inode=14312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PATH item=105 name=(null) inode=14317 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:32:02.632000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:32:02.665131 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Oct 2 19:32:02.714103 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Oct 2 19:32:02.720130 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:32:02.765109 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1441) Oct 2 19:32:02.861046 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:32:02.935561 systemd[1]: Finished systemd-udev-settle.service. 
Oct 2 19:32:02.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:02.940223 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:32:02.975913 lvm[1552]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:32:03.000433 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:32:03.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.002526 systemd[1]: Reached target cryptsetup.target. Oct 2 19:32:03.005008 systemd[1]: Starting lvm2-activation.service... Oct 2 19:32:03.010653 lvm[1553]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:32:03.043442 systemd[1]: Finished lvm2-activation.service. Oct 2 19:32:03.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.044676 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:32:03.051624 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:32:03.051676 systemd[1]: Reached target local-fs.target. Oct 2 19:32:03.052876 systemd[1]: Reached target machines.target. Oct 2 19:32:03.055239 systemd[1]: Starting ldconfig.service... Oct 2 19:32:03.059143 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:32:03.059239 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:32:03.061480 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:32:03.067132 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:32:03.080891 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:32:03.082834 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:32:03.082940 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:32:03.085406 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:32:03.101818 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1555 (bootctl) Oct 2 19:32:03.104438 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:32:03.108698 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:32:03.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.134644 systemd-tmpfiles[1558]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:32:03.136110 systemd-tmpfiles[1558]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:32:03.140256 systemd-tmpfiles[1558]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Oct 2 19:32:03.287684 systemd-fsck[1563]: fsck.fat 4.2 (2021-01-31) Oct 2 19:32:03.287684 systemd-fsck[1563]: /dev/nvme0n1p1: 789 files, 115069/258078 clusters Oct 2 19:32:03.289884 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:32:03.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.295062 systemd[1]: Mounting boot.mount... Oct 2 19:32:03.319494 systemd[1]: Mounted boot.mount. Oct 2 19:32:03.357650 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:32:03.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.488961 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:32:03.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.491606 systemd[1]: Starting audit-rules.service... Oct 2 19:32:03.494790 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:32:03.500875 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:32:03.503000 audit: BPF prog-id=27 op=LOAD Oct 2 19:32:03.508930 systemd[1]: Starting systemd-resolved.service... Oct 2 19:32:03.512000 audit: BPF prog-id=28 op=LOAD Oct 2 19:32:03.517420 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:32:03.524005 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:32:03.545769 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:32:03.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.548992 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:32:03.558000 audit[1583]: SYSTEM_BOOT pid=1583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.565929 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:32:03.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.706944 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:32:03.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:03.739750 systemd-resolved[1580]: Positive Trust Anchors: Oct 2 19:32:03.739775 systemd-resolved[1580]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:32:03.739909 systemd-resolved[1580]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:32:03.747000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:32:03.747000 audit[1598]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe7aa48010 a2=420 a3=0 items=0 ppid=1577 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:03.747000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:32:03.760917 augenrules[1598]: No rules Oct 2 19:32:03.749664 systemd[1]: Finished audit-rules.service. Oct 2 19:32:03.757661 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:32:03.759042 systemd[1]: Reached target time-set.target. Oct 2 19:32:04.583773 systemd-timesyncd[1582]: Contacted time server 204.17.205.8:123 (0.flatcar.pool.ntp.org). Oct 2 19:32:04.583860 systemd-timesyncd[1582]: Initial clock synchronization to Mon 2023-10-02 19:32:04.583586 UTC. Oct 2 19:32:04.589331 systemd-resolved[1580]: Defaulting to hostname 'linux'. Oct 2 19:32:04.592334 systemd[1]: Started systemd-resolved.service. Oct 2 19:32:04.593687 systemd[1]: Reached target network.target. Oct 2 19:32:04.594887 systemd[1]: Reached target nss-lookup.target. Oct 2 19:32:04.602280 systemd-networkd[1445]: eth0: Gained IPv6LL Oct 2 19:32:04.604884 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:32:04.606329 systemd[1]: Reached target network-online.target. Oct 2 19:32:04.642338 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:32:04.643065 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:32:04.930262 ldconfig[1554]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:32:04.935099 systemd[1]: Finished ldconfig.service. Oct 2 19:32:04.937787 systemd[1]: Starting systemd-update-done.service... Oct 2 19:32:04.946428 systemd[1]: Finished systemd-update-done.service. Oct 2 19:32:04.947738 systemd[1]: Reached target sysinit.target. Oct 2 19:32:04.948964 systemd[1]: Started motdgen.path. Oct 2 19:32:04.949977 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:32:04.952012 systemd[1]: Started logrotate.timer. Oct 2 19:32:04.953036 systemd[1]: Started mdadm.timer. Oct 2 19:32:04.954174 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:32:04.955302 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:32:04.955335 systemd[1]: Reached target paths.target. Oct 2 19:32:04.956319 systemd[1]: Reached target timers.target. Oct 2 19:32:04.957719 systemd[1]: Listening on dbus.socket. Oct 2 19:32:04.959945 systemd[1]: Starting docker.socket... Oct 2 19:32:04.963989 systemd[1]: Listening on sshd.socket. 
Oct 2 19:32:04.965375 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:32:04.965899 systemd[1]: Listening on docker.socket. Oct 2 19:32:04.967147 systemd[1]: Reached target sockets.target. Oct 2 19:32:04.968393 systemd[1]: Reached target basic.target. Oct 2 19:32:04.969801 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:32:04.969827 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:32:04.971111 systemd[1]: Started amazon-ssm-agent.service. Oct 2 19:32:04.975140 systemd[1]: Starting containerd.service... Oct 2 19:32:04.977279 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:32:04.980014 systemd[1]: Starting dbus.service... Oct 2 19:32:05.007639 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:32:05.010914 systemd[1]: Starting extend-filesystems.service... Oct 2 19:32:05.012190 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:32:05.023115 systemd[1]: Starting motdgen.service... Oct 2 19:32:05.033380 systemd[1]: Started nvidia.service. Oct 2 19:32:05.036319 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:32:05.039975 systemd[1]: Starting prepare-critools.service... Oct 2 19:32:05.043394 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:32:05.047983 systemd[1]: Starting sshd-keygen.service... Oct 2 19:32:05.054434 systemd[1]: Starting systemd-logind.service... Oct 2 19:32:05.056272 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:32:05.056482 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:32:05.058191 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:32:05.059561 systemd[1]: Starting update-engine.service... Oct 2 19:32:05.063263 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:32:05.152098 jq[1625]: true Oct 2 19:32:05.084528 systemd[1]: Created slice system-sshd.slice. Oct 2 19:32:05.162191 jq[1615]: false Oct 2 19:32:05.165626 tar[1627]: ./ Oct 2 19:32:05.165626 tar[1627]: ./loopback Oct 2 19:32:05.176378 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:32:05.176590 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:32:05.212346 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:32:05.212570 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:32:05.227158 jq[1635]: true Oct 2 19:32:05.227734 dbus-daemon[1614]: [system] SELinux support is enabled Oct 2 19:32:05.227917 systemd[1]: Started dbus.service. Oct 2 19:32:05.235048 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:32:05.235100 systemd[1]: Reached target system-config.target. 
Oct 2 19:32:05.237073 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:32:05.237098 systemd[1]: Reached target user-config.target. Oct 2 19:32:05.280507 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:32:05.280736 systemd[1]: Finished motdgen.service. Oct 2 19:32:05.284551 tar[1629]: crictl Oct 2 19:32:05.301827 dbus-daemon[1614]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1445 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:32:05.306484 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:32:05.458947 extend-filesystems[1616]: Found nvme0n1 Oct 2 19:32:05.464935 extend-filesystems[1616]: Found nvme0n1p1 Oct 2 19:32:05.466081 extend-filesystems[1616]: Found nvme0n1p2 Oct 2 19:32:05.467540 extend-filesystems[1616]: Found nvme0n1p3 Oct 2 19:32:05.469050 extend-filesystems[1616]: Found usr Oct 2 19:32:05.470187 extend-filesystems[1616]: Found nvme0n1p4 Oct 2 19:32:05.471318 extend-filesystems[1616]: Found nvme0n1p6 Oct 2 19:32:05.472550 extend-filesystems[1616]: Found nvme0n1p7 Oct 2 19:32:05.473650 extend-filesystems[1616]: Found nvme0n1p9 Oct 2 19:32:05.474696 extend-filesystems[1616]: Checking size of /dev/nvme0n1p9 Oct 2 19:32:05.518718 amazon-ssm-agent[1611]: 2023/10/02 19:32:05 Failed to load instance info from vault. RegistrationKey does not exist. Oct 2 19:32:05.521760 bash[1675]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:32:05.524588 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:32:05.530725 extend-filesystems[1616]: Resized partition /dev/nvme0n1p9 Oct 2 19:32:05.535364 amazon-ssm-agent[1611]: Initializing new seelog logger Oct 2 19:32:05.535570 amazon-ssm-agent[1611]: New Seelog Logger Creation Complete Oct 2 19:32:05.535660 amazon-ssm-agent[1611]: 2023/10/02 19:32:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:32:05.535704 amazon-ssm-agent[1611]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 2 19:32:05.535950 amazon-ssm-agent[1611]: 2023/10/02 19:32:05 processing appconfig overrides Oct 2 19:32:05.550765 extend-filesystems[1682]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:32:05.559150 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Oct 2 19:32:05.586808 env[1632]: time="2023-10-02T19:32:05.586742713Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:32:05.605740 update_engine[1624]: I1002 19:32:05.604142 1624 main.cc:92] Flatcar Update Engine starting Oct 2 19:32:05.612690 systemd[1]: Started update-engine.service. Oct 2 19:32:05.613956 update_engine[1624]: I1002 19:32:05.612982 1624 update_check_scheduler.cc:74] Next update check in 5m33s Oct 2 19:32:05.616588 systemd[1]: Started locksmithd.service. Oct 2 19:32:05.627798 systemd-logind[1623]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:32:05.627841 systemd-logind[1623]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 2 19:32:05.627865 systemd-logind[1623]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:32:05.628309 systemd-logind[1623]: New seat seat0. Oct 2 19:32:05.639330 tar[1627]: ./bandwidth Oct 2 19:32:05.652048 systemd[1]: Started systemd-logind.service. 
Oct 2 19:32:05.655150 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Oct 2 19:32:05.688221 extend-filesystems[1682]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 2 19:32:05.688221 extend-filesystems[1682]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 2 19:32:05.688221 extend-filesystems[1682]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Oct 2 19:32:05.694240 extend-filesystems[1616]: Resized filesystem in /dev/nvme0n1p9 Oct 2 19:32:05.689245 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:32:05.689457 systemd[1]: Finished extend-filesystems.service. Oct 2 19:32:05.747887 systemd[1]: nvidia.service: Deactivated successfully. Oct 2 19:32:05.828684 tar[1627]: ./ptp Oct 2 19:32:05.852501 dbus-daemon[1614]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:32:05.852679 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:32:05.857328 dbus-daemon[1614]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1659 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:32:05.863441 systemd[1]: Starting polkit.service... Oct 2 19:32:05.887709 env[1632]: time="2023-10-02T19:32:05.887434799Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:32:05.887709 env[1632]: time="2023-10-02T19:32:05.887624216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:05.892760 polkitd[1710]: Started polkitd version 121 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.895202581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.895254319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.895539678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.895563751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.895583354Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.895598507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.895694476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.896018254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.896218028Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:32:05.896597 env[1632]: time="2023-10-02T19:32:05.896239922Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:32:05.897205 env[1632]: time="2023-10-02T19:32:05.896409434Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:32:05.897205 env[1632]: time="2023-10-02T19:32:05.896428013Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902368939Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902503973Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902526539Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902578055Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902599970Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902620206Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902654609Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902675176Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902695449Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902730139Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902747209Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:32:05.902871 env[1632]: time="2023-10-02T19:32:05.902765124Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:32:05.905203 env[1632]: time="2023-10-02T19:32:05.904540312Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:32:05.905203 env[1632]: time="2023-10-02T19:32:05.904726252Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:32:05.905551 env[1632]: time="2023-10-02T19:32:05.905511680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Oct 2 19:32:05.905739 env[1632]: time="2023-10-02T19:32:05.905677091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.905739 env[1632]: time="2023-10-02T19:32:05.905705009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:32:05.905901 env[1632]: time="2023-10-02T19:32:05.905882377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.905984597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.906009935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.906027257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.906046578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.906065544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.906084561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.906103221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906189 env[1632]: time="2023-10-02T19:32:05.906121565Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:32:05.906852 env[1632]: time="2023-10-02T19:32:05.906830839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.906970 env[1632]: time="2023-10-02T19:32:05.906953076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.907068 env[1632]: time="2023-10-02T19:32:05.907053483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:32:05.907164 env[1632]: time="2023-10-02T19:32:05.907151013Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:32:05.907272 env[1632]: time="2023-10-02T19:32:05.907253221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:32:05.907421 env[1632]: time="2023-10-02T19:32:05.907355125Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:32:05.907421 env[1632]: time="2023-10-02T19:32:05.907386384Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:32:05.907584 env[1632]: time="2023-10-02T19:32:05.907533027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:32:05.908306 env[1632]: time="2023-10-02T19:32:05.908031486Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.908466768Z" level=info msg="Connect containerd service" Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.908537261Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.909932220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.910492762Z" level=info msg="Start subscribing containerd event" Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.910558399Z" level=info msg="Start recovering state" Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.910766614Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.910918023Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:32:05.914179 env[1632]: time="2023-10-02T19:32:05.911633368Z" level=info msg="containerd successfully booted in 0.347718s" Oct 2 19:32:05.911473 systemd[1]: Started containerd.service. 
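The "Start cri plugin with config {...}" line above dumps containerd's effective CRI configuration in Go struct notation. The sketch below pulls a few fields of interest (snapshotter, default runtime, sandbox image, the runc SystemdCgroup option) out of such a line with regular expressions; the field names come verbatim from the dump, but the parsing itself is only an illustration, not a containerd API, and the sample string is abbreviated.

import re

# Abbreviated copy of the config dump from the containerd log line above.
line = ('Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs '
        'DefaultRuntimeName:runc ... Options:map[SystemdCgroup:true] ...} '
        '... SandboxImage:registry.k8s.io/pause:3.6 ...}}')

fields = {
    "Snapshotter": r"Snapshotter:(\S+)",
    "DefaultRuntimeName": r"DefaultRuntimeName:(\S+)",
    "SandboxImage": r"SandboxImage:(\S+)",
    "SystemdCgroup (runc option)": r"Options:map\[SystemdCgroup:(\w+)\]",
}

for name, pattern in fields.items():
    match = re.search(pattern, line)
    print(f"{name}: {match.group(1) if match else '<not found>'}")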
Oct 2 19:32:05.947242 polkitd[1710]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:32:05.947762 polkitd[1710]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:32:05.963013 polkitd[1710]: Finished loading, compiling and executing 2 rules Oct 2 19:32:05.965106 dbus-daemon[1614]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:32:05.966219 env[1632]: time="2023-10-02T19:32:05.910632292Z" level=info msg="Start event monitor" Oct 2 19:32:05.966219 env[1632]: time="2023-10-02T19:32:05.965568809Z" level=info msg="Start snapshots syncer" Oct 2 19:32:05.966219 env[1632]: time="2023-10-02T19:32:05.965595596Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:32:05.966219 env[1632]: time="2023-10-02T19:32:05.965607792Z" level=info msg="Start streaming server" Oct 2 19:32:05.965327 systemd[1]: Started polkit.service. Oct 2 19:32:05.968851 polkitd[1710]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:32:05.991621 systemd-hostnamed[1659]: Hostname set to <ip-172-31-22-219> (transient) Oct 2 19:32:05.991738 systemd-resolved[1580]: System hostname changed to 'ip-172-31-22-219'. Oct 2 19:32:06.128541 tar[1627]: ./vlan Oct 2 19:32:06.290335 coreos-metadata[1613]: Oct 02 19:32:06.278 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 2 19:32:06.296235 coreos-metadata[1613]: Oct 02 19:32:06.296 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Oct 2 19:32:06.298392 coreos-metadata[1613]: Oct 02 19:32:06.298 INFO Fetch successful Oct 2 19:32:06.298567 coreos-metadata[1613]: Oct 02 19:32:06.298 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 2 19:32:06.300179 coreos-metadata[1613]: Oct 02 19:32:06.300 INFO Fetch successful Oct 2 19:32:06.306898 unknown[1613]: wrote ssh authorized keys file for user: core Oct 2 19:32:06.342428 update-ssh-keys[1767]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:32:06.343082 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:32:06.395802 tar[1627]: ./host-device Oct 2 19:32:06.462741 amazon-ssm-agent[1611]: 2023-10-02 19:32:06 INFO Entering SSM Agent hibernate - AccessDeniedException: User: arn:aws:sts::075585003325:assumed-role/jenkins-test/i-049fbc5375d104d4b is not authorized to perform: ssm:UpdateInstanceInformation on resource: arn:aws:ec2:us-west-2:075585003325:instance/i-049fbc5375d104d4b because no identity-based policy allows the ssm:UpdateInstanceInformation action Oct 2 19:32:06.462741 amazon-ssm-agent[1611]: status code: 400, request id: 8b9ea0ac-5b59-462f-99d2-8fc564e68b14 Oct 2 19:32:06.463869 amazon-ssm-agent[1611]: 2023-10-02 19:32:06 INFO Agent is in hibernate mode. Reducing logging. Logging will be reduced to one log per backoff period Oct 2 19:32:06.562479 tar[1627]: ./tuning Oct 2 19:32:06.692430 tar[1627]: ./vrf Oct 2 19:32:06.742831 tar[1627]: ./sbr Oct 2 19:32:06.789165 tar[1627]: ./tap Oct 2 19:32:06.842219 tar[1627]: ./dhcp Oct 2 19:32:07.037877 tar[1627]: ./static Oct 2 19:32:07.107767 tar[1627]: ./firewall Oct 2 19:32:07.194801 tar[1627]: ./macvlan Oct 2 19:32:07.245231 systemd[1]: Finished prepare-critools.service. Oct 2 19:32:07.250571 tar[1627]: ./dummy Oct 2 19:32:07.299741 tar[1627]: ./bridge Oct 2 19:32:07.353265 tar[1627]: ./ipvlan Oct 2 19:32:07.402001 tar[1627]: ./portmap Oct 2 19:32:07.449388 tar[1627]: ./host-local Oct 2 19:32:07.533573 systemd[1]: Finished prepare-cni-plugins.service. 
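coreos-metadata above obtains the core user's SSH key with the IMDSv2 two-step flow: PUT to the token endpoint, then GET the public-keys paths it logs. A minimal standard-library sketch of the same flow, assuming it runs on an EC2 instance where IMDS is reachable at 169.254.169.254 (the paths are the ones in the log):

import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 60) -> str:
    # IMDSv2: request a short-lived session token first.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
keys_index = imds_get("/2019-10-01/meta-data/public-keys", token)      # e.g. "0=my-keypair"
openssh_key = imds_get("/2019-10-01/meta-data/public-keys/0/openssh-key", token)
print(openssh_key)  # the line that ends up in /home/core/.ssh/authorized_keys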
Oct 2 19:32:07.549691 sshd_keygen[1647]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:32:07.576439 systemd[1]: Finished sshd-keygen.service. Oct 2 19:32:07.583946 systemd[1]: Starting issuegen.service... Oct 2 19:32:07.591498 systemd[1]: Started sshd@0-172.31.22.219:22-139.178.89.65:36876.service. Oct 2 19:32:07.599832 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:32:07.600153 systemd[1]: Finished issuegen.service. Oct 2 19:32:07.603640 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:32:07.616647 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:32:07.620786 systemd[1]: Started getty@tty1.service. Oct 2 19:32:07.624882 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:32:07.627489 systemd[1]: Reached target getty.target. Oct 2 19:32:07.628921 systemd[1]: Reached target multi-user.target. Oct 2 19:32:07.632075 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:32:07.644772 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:32:07.644976 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:32:07.647864 systemd[1]: Startup finished in 902ms (kernel) + 10.974s (initrd) + 11.015s (userspace) = 22.893s. Oct 2 19:32:07.714833 locksmithd[1692]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:32:07.847094 sshd[1816]: Accepted publickey for core from 139.178.89.65 port 36876 ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4 Oct 2 19:32:07.849155 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:07.859848 systemd[1]: Created slice user-500.slice. Oct 2 19:32:07.861393 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:32:07.865886 systemd-logind[1623]: New session 1 of user core. Oct 2 19:32:07.872671 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:32:07.874391 systemd[1]: Starting user@500.service... Oct 2 19:32:07.878346 (systemd)[1826]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:08.047373 systemd[1826]: Queued start job for default target default.target. Oct 2 19:32:08.048016 systemd[1826]: Reached target paths.target. Oct 2 19:32:08.048047 systemd[1826]: Reached target sockets.target. Oct 2 19:32:08.048065 systemd[1826]: Reached target timers.target. Oct 2 19:32:08.048081 systemd[1826]: Reached target basic.target. Oct 2 19:32:08.048222 systemd[1]: Started user@500.service. Oct 2 19:32:08.049435 systemd[1]: Started session-1.scope. Oct 2 19:32:08.049989 systemd[1826]: Reached target default.target. Oct 2 19:32:08.050217 systemd[1826]: Startup finished in 165ms. Oct 2 19:32:08.207337 systemd[1]: Started sshd@1-172.31.22.219:22-139.178.89.65:47608.service. Oct 2 19:32:08.374498 sshd[1835]: Accepted publickey for core from 139.178.89.65 port 47608 ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4 Oct 2 19:32:08.375908 sshd[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:08.380924 systemd-logind[1623]: New session 2 of user core. Oct 2 19:32:08.381501 systemd[1]: Started session-2.scope. Oct 2 19:32:08.522377 sshd[1835]: pam_unix(sshd:session): session closed for user core Oct 2 19:32:08.526219 systemd[1]: sshd@1-172.31.22.219:22-139.178.89.65:47608.service: Deactivated successfully. Oct 2 19:32:08.527361 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:32:08.528022 systemd-logind[1623]: Session 2 logged out. 
Waiting for processes to exit. Oct 2 19:32:08.528904 systemd-logind[1623]: Removed session 2. Oct 2 19:32:08.547105 systemd[1]: Started sshd@2-172.31.22.219:22-139.178.89.65:47618.service. Oct 2 19:32:08.712102 sshd[1841]: Accepted publickey for core from 139.178.89.65 port 47618 ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4 Oct 2 19:32:08.713844 sshd[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:08.721424 systemd-logind[1623]: New session 3 of user core. Oct 2 19:32:08.728457 systemd[1]: Started session-3.scope. Oct 2 19:32:08.849944 sshd[1841]: pam_unix(sshd:session): session closed for user core Oct 2 19:32:08.857204 systemd[1]: sshd@2-172.31.22.219:22-139.178.89.65:47618.service: Deactivated successfully. Oct 2 19:32:08.858333 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:32:08.859009 systemd-logind[1623]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:32:08.860058 systemd-logind[1623]: Removed session 3. Oct 2 19:32:08.883941 systemd[1]: Started sshd@3-172.31.22.219:22-139.178.89.65:47628.service. Oct 2 19:32:09.050035 sshd[1847]: Accepted publickey for core from 139.178.89.65 port 47628 ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4 Oct 2 19:32:09.052218 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:09.057197 systemd-logind[1623]: New session 4 of user core. Oct 2 19:32:09.058857 systemd[1]: Started session-4.scope. Oct 2 19:32:09.188932 sshd[1847]: pam_unix(sshd:session): session closed for user core Oct 2 19:32:09.193525 systemd[1]: sshd@3-172.31.22.219:22-139.178.89.65:47628.service: Deactivated successfully. Oct 2 19:32:09.194822 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:32:09.195808 systemd-logind[1623]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:32:09.196923 systemd-logind[1623]: Removed session 4. Oct 2 19:32:09.220485 systemd[1]: Started sshd@4-172.31.22.219:22-139.178.89.65:47630.service. Oct 2 19:32:09.395292 sshd[1853]: Accepted publickey for core from 139.178.89.65 port 47630 ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4 Oct 2 19:32:09.397009 sshd[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:09.408684 systemd-logind[1623]: New session 5 of user core. Oct 2 19:32:09.409277 systemd[1]: Started session-5.scope. Oct 2 19:32:09.589222 sudo[1856]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:32:09.589525 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:32:09.602379 dbus-daemon[1614]: Н:\xb1\x88U: received setenforce notice (enforcing=-20715600) Oct 2 19:32:09.604839 sudo[1856]: pam_unix(sudo:session): session closed for user root Oct 2 19:32:09.629603 sshd[1853]: pam_unix(sshd:session): session closed for user core Oct 2 19:32:09.634187 systemd-logind[1623]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:32:09.634534 systemd[1]: sshd@4-172.31.22.219:22-139.178.89.65:47630.service: Deactivated successfully. Oct 2 19:32:09.635542 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:32:09.636575 systemd-logind[1623]: Removed session 5. Oct 2 19:32:09.656753 systemd[1]: Started sshd@5-172.31.22.219:22-139.178.89.65:47638.service. 
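Each sshd "Accepted publickey" entry above has the same shape: user, source address, port, and the SHA256 fingerprint of the offered key. A small sketch for extracting those fields, written against the exact format shown here (real sshd output varies with version and configuration):

import re

PATTERN = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) port (?P<port>\d+) "
    r"ssh2: RSA (?P<fingerprint>SHA256:\S+)"
)

line = ("sshd[1816]: Accepted publickey for core from 139.178.89.65 port 36876 "
        "ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4")

m = PATTERN.search(line)
if m:
    print(m.groupdict())
    # {'user': 'core', 'addr': '139.178.89.65', 'port': '36876',
    #  'fingerprint': 'SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4'}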
Oct 2 19:32:09.832051 sshd[1860]: Accepted publickey for core from 139.178.89.65 port 47638 ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4 Oct 2 19:32:09.833985 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:09.847226 systemd-logind[1623]: New session 6 of user core. Oct 2 19:32:09.847535 systemd[1]: Started session-6.scope. Oct 2 19:32:09.977989 sudo[1864]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:32:09.978448 sudo[1864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:32:09.990504 sudo[1864]: pam_unix(sudo:session): session closed for user root Oct 2 19:32:10.007508 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:32:10.007812 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:32:10.034277 systemd[1]: Stopping audit-rules.service... Oct 2 19:32:10.034000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:32:10.038035 kernel: kauditd_printk_skb: 181 callbacks suppressed Oct 2 19:32:10.040692 kernel: audit: type=1305 audit(1696275130.034:165): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:32:10.040763 auditctl[1867]: No rules Oct 2 19:32:10.038463 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:32:10.038676 systemd[1]: Stopped audit-rules.service. Oct 2 19:32:10.034000 audit[1867]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5cf90070 a2=420 a3=0 items=0 ppid=1 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:10.042566 systemd[1]: Starting audit-rules.service... Oct 2 19:32:10.050110 kernel: audit: type=1300 audit(1696275130.034:165): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5cf90070 a2=420 a3=0 items=0 ppid=1 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:10.034000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:32:10.058404 kernel: audit: type=1327 audit(1696275130.034:165): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:32:10.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.067151 kernel: audit: type=1131 audit(1696275130.037:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.095211 augenrules[1884]: No rules Oct 2 19:32:10.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.097620 sudo[1863]: pam_unix(sudo:session): session closed for user root Oct 2 19:32:10.096080 systemd[1]: Finished audit-rules.service. 
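The audit SYSCALL record above ends with a PROCTITLE field; when the process title contains non-printable bytes (here, the NUL separators of argv), auditd hex-encodes it. A short decoder for that encoding:

def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode() for arg in raw.split(b"\x00")]

print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
# ['/sbin/auditctl', '-D']  -- auditctl -D deletes all rules, matching the "No rules" output above.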
Oct 2 19:32:10.096000 audit[1863]: USER_END pid=1863 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.109723 kernel: audit: type=1130 audit(1696275130.095:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.109856 kernel: audit: type=1106 audit(1696275130.096:168): pid=1863 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.109886 kernel: audit: type=1104 audit(1696275130.096:169): pid=1863 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.096000 audit[1863]: CRED_DISP pid=1863 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.121262 sshd[1860]: pam_unix(sshd:session): session closed for user core Oct 2 19:32:10.123000 audit[1860]: USER_END pid=1860 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.126304 systemd[1]: sshd@5-172.31.22.219:22-139.178.89.65:47638.service: Deactivated successfully. Oct 2 19:32:10.128488 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:32:10.131186 systemd-logind[1623]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:32:10.132158 kernel: audit: type=1106 audit(1696275130.123:170): pid=1860 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.132351 kernel: audit: type=1104 audit(1696275130.123:171): pid=1860 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.123000 audit[1860]: CRED_DISP pid=1860 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.133200 systemd-logind[1623]: Removed session 6. Oct 2 19:32:10.164137 kernel: audit: type=1131 audit(1696275130.123:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.219:22-139.178.89.65:47638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:10.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.22.219:22-139.178.89.65:47638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.165562 systemd[1]: Started sshd@6-172.31.22.219:22-139.178.89.65:47642.service. Oct 2 19:32:10.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.219:22-139.178.89.65:47642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.339000 audit[1890]: USER_ACCT pid=1890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.341325 sshd[1890]: Accepted publickey for core from 139.178.89.65 port 47642 ssh2: RSA SHA256:3SUoTCyUzcWgmGRSvO30phAsJbW/q0F+muoiwscTNp4 Oct 2 19:32:10.341000 audit[1890]: CRED_ACQ pid=1890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.341000 audit[1890]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd9631d40 a2=3 a3=0 items=0 ppid=1 pid=1890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:10.341000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:32:10.343404 sshd[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:32:10.349570 systemd[1]: Started session-7.scope. Oct 2 19:32:10.351339 systemd-logind[1623]: New session 7 of user core. Oct 2 19:32:10.372000 audit[1890]: USER_START pid=1890 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.378000 audit[1892]: CRED_ACQ pid=1892 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:10.472000 audit[1893]: USER_ACCT pid=1893 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:10.473789 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:32:10.472000 audit[1893]: CRED_REFR pid=1893 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:32:10.474105 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:32:10.475000 audit[1893]: USER_START pid=1893 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:11.124812 systemd[1]: Reloading. Oct 2 19:32:11.249102 /usr/lib/systemd/system-generators/torcx-generator[1925]: time="2023-10-02T19:32:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:32:11.249578 /usr/lib/systemd/system-generators/torcx-generator[1925]: time="2023-10-02T19:32:11Z" level=info msg="torcx already run" Oct 2 19:32:11.377567 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:32:11.377591 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:32:11.415140 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.499000 audit: BPF prog-id=37 op=LOAD Oct 2 19:32:11.499000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.500000 audit: BPF prog-id=38 op=LOAD Oct 2 19:32:11.500000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit: BPF prog-id=39 op=LOAD Oct 2 19:32:11.504000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit: BPF prog-id=40 op=LOAD Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.504000 audit: BPF prog-id=41 op=LOAD Oct 2 19:32:11.504000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:32:11.504000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit: BPF prog-id=42 op=LOAD Oct 2 19:32:11.507000 audit: BPF 
prog-id=24 op=UNLOAD Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit: BPF prog-id=43 op=LOAD Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.507000 audit: BPF prog-id=44 op=LOAD Oct 2 19:32:11.507000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:32:11.507000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.510000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit: BPF prog-id=45 op=LOAD Oct 2 19:32:11.511000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit: BPF prog-id=46 op=LOAD Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.511000 audit: BPF prog-id=47 op=LOAD Oct 2 19:32:11.511000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:32:11.511000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit: BPF prog-id=48 op=LOAD Oct 2 19:32:11.514000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:32:11.514000 audit: BPF prog-id=49 op=LOAD Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.514000 audit: BPF prog-id=50 op=LOAD Oct 2 19:32:11.514000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:32:11.514000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: 
AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.515000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit: BPF prog-id=51 op=LOAD Oct 2 19:32:11.516000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit: BPF prog-id=52 op=LOAD Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.516000 audit: BPF prog-id=53 op=LOAD Oct 2 19:32:11.516000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:32:11.516000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:11.517000 audit: BPF prog-id=54 op=LOAD Oct 2 19:32:11.517000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:32:11.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Oct 2 19:32:11.533573 systemd[1]: Started kubelet.service.
Oct 2 19:32:11.553430 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:32:11.637704 kubelet[1975]: E1002 19:32:11.637480 1975 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 2 19:32:11.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:32:11.642991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:32:11.643186 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:32:11.687884 coreos-metadata[1982]: Oct 02 19:32:11.687 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 2 19:32:11.691806 coreos-metadata[1982]: Oct 02 19:32:11.691 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Oct 2 19:32:11.693140 coreos-metadata[1982]: Oct 02 19:32:11.693 INFO Fetch successful
Oct 2 19:32:11.693140 coreos-metadata[1982]: Oct 02 19:32:11.693 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Oct 2 19:32:11.693917 coreos-metadata[1982]: Oct 02 19:32:11.693 INFO Fetch successful
Oct 2 19:32:11.693994 coreos-metadata[1982]: Oct 02 19:32:11.693 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Oct 2 19:32:11.694906 coreos-metadata[1982]: Oct 02 19:32:11.694 INFO Fetch successful
Oct 2 19:32:11.695008 coreos-metadata[1982]: Oct 02 19:32:11.694 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Oct 2 19:32:11.695492 coreos-metadata[1982]: Oct 02 19:32:11.695 INFO Fetch successful
Oct 2 19:32:11.695563 coreos-metadata[1982]: Oct 02 19:32:11.695 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Oct 2 19:32:11.695925 coreos-metadata[1982]: Oct 02 19:32:11.695 INFO Fetch successful
Oct 2 19:32:11.695992 coreos-metadata[1982]: Oct 02 19:32:11.695 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Oct 2 19:32:11.696336 coreos-metadata[1982]: Oct 02 19:32:11.696 INFO Fetch successful
Oct 2 19:32:11.696401 coreos-metadata[1982]: Oct 02 19:32:11.696 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Oct 2 19:32:11.696752 coreos-metadata[1982]: Oct 02 19:32:11.696 INFO Fetch successful
Oct 2 19:32:11.696839 coreos-metadata[1982]: Oct 02 19:32:11.696 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Oct 2 19:32:11.697185 coreos-metadata[1982]: Oct 02 19:32:11.697 INFO Fetch successful
Oct 2 19:32:11.707942 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:32:11.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:32:12.126494 systemd[1]: Stopped kubelet.service.
Oct 2 19:32:12.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
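Editor's note: the coreos-metadata entries above show the usual EC2 instance-metadata sequence, a PUT to /latest/api/token for an IMDSv2 session token followed by GETs under the 2019-10-01 meta-data tree. The sketch below reproduces that sequence with only the Python standard library; the token TTL and timeout values are arbitrary choices, and it naturally only returns data when run on an EC2 instance.

```python
# Minimal sketch of the metadata sequence coreos-metadata logs above:
# PUT to the IMDSv2 token endpoint, then GET individual meta-data paths
# with the token attached. The paths mirror the ones fetched in this log.
import urllib.request

IMDS = "http://169.254.169.254"

def fetch_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def fetch(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = fetch_token()
    for path in ("meta-data/instance-id", "meta-data/instance-type",
                 "meta-data/local-ipv4", "meta-data/placement/availability-zone"):
        print(path, "=", fetch(path, token))
```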
Oct 2 19:32:12.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:32:12.146996 systemd[1]: Reloading.
Oct 2 19:32:12.253604 /usr/lib/systemd/system-generators/torcx-generator[2038]: time="2023-10-02T19:32:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:32:12.254061 /usr/lib/systemd/system-generators/torcx-generator[2038]: time="2023-10-02T19:32:12Z" level=info msg="torcx already run"
Oct 2 19:32:12.351549 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 2 19:32:12.351577 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 2 19:32:12.377330 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:32:12.464000 audit: BPF prog-id=55 op=LOAD
Oct 2 19:32:12.464000 audit: BPF
prog-id=37 op=UNLOAD Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.465000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.466000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.466000 audit: BPF prog-id=56 op=LOAD Oct 2 19:32:12.466000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.468000 audit: BPF prog-id=57 op=LOAD Oct 2 19:32:12.468000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit: BPF prog-id=58 op=LOAD Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.469000 audit: BPF prog-id=59 op=LOAD Oct 2 19:32:12.469000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:32:12.469000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:32:12.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit: BPF prog-id=60 op=LOAD Oct 2 19:32:12.472000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit: BPF prog-id=61 op=LOAD Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.472000 audit: BPF prog-id=62 op=LOAD Oct 2 19:32:12.472000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:32:12.472000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.475000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit: BPF prog-id=63 op=LOAD Oct 2 19:32:12.476000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit: BPF prog-id=64 op=LOAD Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.476000 audit: BPF prog-id=65 op=LOAD Oct 2 19:32:12.476000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:32:12.476000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.478000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit: BPF prog-id=66 op=LOAD Oct 2 19:32:12.479000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit: BPF prog-id=67 op=LOAD Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit: BPF prog-id=68 op=LOAD Oct 2 19:32:12.479000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:32:12.479000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.479000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit: BPF prog-id=69 op=LOAD Oct 2 19:32:12.480000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit: BPF prog-id=70 op=LOAD Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.480000 audit: BPF prog-id=71 op=LOAD Oct 2 19:32:12.480000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:32:12.480000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:12.481000 audit: BPF prog-id=72 op=LOAD Oct 2 19:32:12.481000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:32:12.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:12.505672 systemd[1]: Started kubelet.service. Oct 2 19:32:12.564241 kubelet[2091]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 2 19:32:12.564603 kubelet[2091]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 2 19:32:12.564644 kubelet[2091]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 2 19:32:12.564754 kubelet[2091]: I1002 19:32:12.564725 2091 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 2 19:32:13.217080 kubelet[2091]: I1002 19:32:13.217043 2091 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Oct 2 19:32:13.217080 kubelet[2091]: I1002 19:32:13.217076 2091 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 2 19:32:13.217574 kubelet[2091]: I1002 19:32:13.217532 2091 server.go:895] "Client rotation is on, will bootstrap in background"
Oct 2 19:32:13.223298 kubelet[2091]: I1002 19:32:13.223271 2091 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 2 19:32:13.238881 kubelet[2091]: I1002 19:32:13.238811 2091 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 2 19:32:13.239467 kubelet[2091]: I1002 19:32:13.239441 2091 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 2 19:32:13.239941 kubelet[2091]: I1002 19:32:13.239840 2091 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 2 19:32:13.239941 kubelet[2091]: I1002 19:32:13.239937 2091 topology_manager.go:138] "Creating topology manager with none policy"
Oct 2 19:32:13.240274 kubelet[2091]: I1002 19:32:13.239953 2091 container_manager_linux.go:301] "Creating device plugin manager"
Oct 2 19:32:13.240274 kubelet[2091]: I1002 19:32:13.240148 2091 state_mem.go:36] "Initialized new in-memory state store"
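Editor's note: the deprecation warnings above point at the file passed to --config, and the earlier restart failed precisely because /var/lib/kubelet/config.yaml was missing. As a rough illustration only, the sketch below writes a minimal KubeletConfiguration to that path. The field names (containerRuntimeEndpoint, volumePluginDir, cgroupDriver, staticPodPath) are my reading of the kubelet.config.k8s.io/v1beta1 schema for a v1.28 kubelet, the values echo paths that appear elsewhere in this log, and none of it is taken from this machine's real configuration; as far as I know the kubelet accepts either JSON or YAML for this file, so JSON is used to stay within the standard library.

```python
# Hypothetical sketch: emit a minimal KubeletConfiguration as JSON to the
# path the kubelet complained about. Field names and values below are
# illustrative guesses, not this machine's actual configuration.
import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces the deprecated --container-runtime-endpoint flag
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # replaces the deprecated --volume-plugin-dir flag
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    "cgroupDriver": "systemd",
    "staticPodPath": "/etc/kubernetes/manifests",
}

with open("/var/lib/kubelet/config.yaml", "w") as f:
    json.dump(kubelet_config, f, indent=2)
```

In practice this file is normally laid down by the provisioning tooling (Ignition, kubeadm, or similar) rather than written by hand on the node.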
store" Oct 2 19:32:13.240274 kubelet[2091]: I1002 19:32:13.240255 2091 kubelet.go:393] "Attempting to sync node with API server" Oct 2 19:32:13.240450 kubelet[2091]: I1002 19:32:13.240279 2091 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:32:13.240450 kubelet[2091]: I1002 19:32:13.240346 2091 kubelet.go:309] "Adding apiserver pod source" Oct 2 19:32:13.240450 kubelet[2091]: I1002 19:32:13.240397 2091 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:32:13.241001 kubelet[2091]: E1002 19:32:13.240987 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:13.241553 kubelet[2091]: E1002 19:32:13.241079 2091 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:13.241983 kubelet[2091]: I1002 19:32:13.241968 2091 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:32:13.243597 kubelet[2091]: W1002 19:32:13.243581 2091 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:32:13.244722 kubelet[2091]: I1002 19:32:13.244708 2091 server.go:1232] "Started kubelet" Oct 2 19:32:13.245263 kubelet[2091]: I1002 19:32:13.245245 2091 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:32:13.245663 kubelet[2091]: I1002 19:32:13.245644 2091 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Oct 2 19:32:13.246600 kubelet[2091]: I1002 19:32:13.246581 2091 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 2 19:32:13.247550 kubelet[2091]: I1002 19:32:13.246848 2091 server.go:462] "Adding debug handlers to kubelet server" Oct 2 19:32:13.249540 kubelet[2091]: E1002 19:32:13.249525 2091 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:32:13.249791 kubelet[2091]: E1002 19:32:13.249775 2091 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:32:13.250000 audit[2091]: AVC avc: denied { mac_admin } for pid=2091 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:13.250000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:32:13.250000 audit[2091]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000be6660 a1=c000773710 a2=c000be6630 a3=25 items=0 ppid=1 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.250000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:32:13.250000 audit[2091]: AVC avc: denied { mac_admin } for pid=2091 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:13.250000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:32:13.250000 audit[2091]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b4efe0 a1=c000773728 a2=c000be66f0 a3=25 items=0 ppid=1 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.250000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:32:13.253009 kubelet[2091]: I1002 19:32:13.250957 2091 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:32:13.253009 kubelet[2091]: I1002 19:32:13.251003 2091 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:32:13.253009 kubelet[2091]: I1002 19:32:13.251141 2091 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:32:13.259003 kubelet[2091]: E1002 19:32:13.258318 2091 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.22.219\" not found" Oct 2 19:32:13.259003 kubelet[2091]: I1002 19:32:13.258359 2091 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 2 19:32:13.259003 kubelet[2091]: I1002 19:32:13.258484 2091 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:32:13.259003 kubelet[2091]: I1002 19:32:13.258553 2091 reconciler_new.go:29] "Reconciler: start to sync state" Oct 2 19:32:13.270273 kubelet[2091]: E1002 19:32:13.270078 2091 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.22.219\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API 
group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Oct 2 19:32:13.295239 kubelet[2091]: E1002 19:32:13.295068 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1660a1d9", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 244678617, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 244678617, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.296925 kubelet[2091]: W1002 19:32:13.296890 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.22.219" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:32:13.297064 kubelet[2091]: E1002 19:32:13.296959 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.22.219" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:32:13.297064 kubelet[2091]: W1002 19:32:13.297026 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:32:13.297064 kubelet[2091]: E1002 19:32:13.297041 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:32:13.297215 kubelet[2091]: W1002 19:32:13.297097 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:32:13.297215 kubelet[2091]: E1002 19:32:13.297116 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:32:13.307023 kubelet[2091]: E1002 19:32:13.306886 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f16ac6787", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 249644423, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 249644423, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.320058 kubelet[2091]: I1002 19:32:13.320034 2091 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:32:13.320309 kubelet[2091]: I1002 19:32:13.320298 2091 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:32:13.320392 kubelet[2091]: I1002 19:32:13.320384 2091 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:32:13.321121 kubelet[2091]: E1002 19:32:13.320999 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad10b0a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.219 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319154442, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319154442, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
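[Note] The InvalidDiskCapacity warning above is the CRI stats provider not yet having data for the containerd overlayfs mountpoint; the thresholds that will eventually act on those stats were dumped a moment earlier in the container manager's nodeConfig line. A minimal Go sketch, not part of this log and using only the standard library, that re-parses the logged HardEvictionThresholds fragment into readable form (the struct mirrors just the fields visible in the log, not the real kubelet types):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // threshold mirrors only the fields printed in the nodeConfig JSON above.
    type threshold struct {
    	Signal   string
    	Operator string
    	Value    struct {
    		Quantity   *string
    		Percentage float64
    	}
    }

    func main() {
    	// Copied from the HardEvictionThresholds fragment in the container manager line above.
    	raw := `[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
    	{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
    	{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
    	{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]`

    	var thresholds []threshold
    	if err := json.Unmarshal([]byte(raw), &thresholds); err != nil {
    		panic(err)
    	}
    	for _, t := range thresholds {
    		if t.Value.Quantity != nil {
    			fmt.Printf("evict when %s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
    		} else {
    			fmt.Printf("evict when %s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
    		}
    	}
    }

It prints the familiar defaults: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%.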
Oct 2 19:32:13.322678 kubelet[2091]: E1002 19:32:13.322597 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad12427", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.219 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319160871, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319160871, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.323155 kubelet[2091]: I1002 19:32:13.323098 2091 policy_none.go:49] "None policy: Start" Oct 2 19:32:13.323910 kubelet[2091]: I1002 19:32:13.323897 2091 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:32:13.324016 kubelet[2091]: I1002 19:32:13.324007 2091 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:32:13.325468 kubelet[2091]: E1002 19:32:13.325391 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad13a18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.219 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319166488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319166488, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.331052 systemd[1]: Created slice kubepods.slice. Oct 2 19:32:13.338480 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:32:13.343177 systemd[1]: Created slice kubepods-besteffort.slice. 
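[Note] Each rejected event above is named like 172.31.22.219.178a613f1660a1d9. client-go's event recorder conventionally builds that suffix from the involved object's name plus the first-observed timestamp in nanoseconds, hex-encoded; a quick standard-library check against the log (the constant below is copied from the event name above):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// Hex suffix of the event name "172.31.22.219.178a613f1660a1d9" above.
    	ns, err := strconv.ParseInt("178a613f1660a1d9", 16, 64)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(time.Unix(0, ns).UTC()) // 2023-10-02 19:32:13.244678617 +0000 UTC
    }

That instant matches the FirstTimestamp recorded inside the event (the node's clock appears to be UTC), and the later "cannot patch" rejections reuse the same names with Count bumped to 2, 3 and 4: the recorder's normal aggregation path, still blocked because the kubelet is talking to the API server as system:anonymous until its TLS bootstrap completes.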
Oct 2 19:32:13.342000 audit[2106]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.342000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffd962e4a0 a2=0 a3=7fffd962e48c items=0 ppid=2091 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:32:13.346000 audit[2109]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.346000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffd8c1a7e70 a2=0 a3=7ffd8c1a7e5c items=0 ppid=2091 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:32:13.353324 kubelet[2091]: I1002 19:32:13.353296 2091 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:32:13.353443 kubelet[2091]: I1002 19:32:13.353368 2091 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:32:13.352000 audit[2091]: AVC avc: denied { mac_admin } for pid=2091 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:13.352000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:32:13.352000 audit[2091]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c6fe90 a1=c000b4a168 a2=c000c6fe60 a3=25 items=0 ppid=1 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.352000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:32:13.354046 kubelet[2091]: I1002 19:32:13.353795 2091 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:32:13.355919 kubelet[2091]: E1002 19:32:13.354953 2091 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.22.219\" not found" Oct 2 19:32:13.357569 kubelet[2091]: E1002 19:32:13.357279 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1d0345b0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 356000688, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 356000688, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.359712 kubelet[2091]: I1002 19:32:13.359696 2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.219" Oct 2 19:32:13.360983 kubelet[2091]: E1002 19:32:13.360962 2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.219" Oct 2 19:32:13.361509 kubelet[2091]: E1002 19:32:13.361436 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad10b0a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.219 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319154442, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 359650352, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad10b0a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
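[Note] In the SYSCALL records above (and the earlier pair around the plugins directories), arch=c000003e is AUDIT_ARCH_X86_64, syscall 188 is setxattr and exit=-22 is -EINVAL, the kernel-side view of kubelet's "could not set selinux context ... invalid argument" warnings; the accompanying AVC shows mac_admin (capability 33, CAP_MAC_ADMIN) being denied, while the systemd denials further up are for CAP_PERFMON (38) and CAP_BPF (39). The PROCTITLE field is the process argv, NUL-separated and hex-encoded, and truncated by auditd. A minimal standard-library sketch, not part of the log, that decodes the kubelet PROCTITLE copied from above:

    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    func main() {
    	// PROCTITLE value from the kubelet SYSCALL record above, copied verbatim
    	// (auditd truncates the record, so the final argument is cut short).
    	const proctitle = "2F6F70742F62696E2F6B7562656C6574" + // /opt/bin/kubelet
    		"002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66" +
    		"002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66" +
    		"002D2D636F6E6669"

    	raw, err := hex.DecodeString(proctitle)
    	if err != nil {
    		panic(err)
    	}
    	for i, arg := range strings.Split(string(raw), "\x00") { // argv is NUL-separated
    		fmt.Printf("argv[%d] = %q\n", i, arg)
    	}
    }

The last element decodes as "--confi" only because the audit record itself is truncated; nothing beyond that point is recoverable from this log.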
Oct 2 19:32:13.362533 kubelet[2091]: E1002 19:32:13.362449 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad12427", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.219 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319160871, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 359660993, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad12427" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.363536 kubelet[2091]: E1002 19:32:13.363472 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad13a18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.219 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319166488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 359664966, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad13a18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
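[Note] All of these rejections are ordinary Forbidden (HTTP 403) responses from the API server, surfaced as Kubernetes Status errors, because the kubelet's API client is still unauthenticated (system:anonymous) while its TLS bootstrap runs in the background; they stop once "Certificate rotation detected" appears further down and the node registers. A short illustrative sketch of how client code typically classifies such errors (this assumes k8s.io/apimachinery is on the module path and is not the kubelet's own code):

    package main

    import (
    	"errors"
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    )

    func main() {
    	// Rebuild an error of the same shape as the lease failure above (illustrative only).
    	err := apierrors.NewForbidden(
    		schema.GroupResource{Group: "coordination.k8s.io", Resource: "leases"},
    		"172.31.22.219",
    		errors.New(`User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io"`),
    	)

    	fmt.Println(apierrors.IsForbidden(err)) // true: an RBAC rejection, not a transport failure
    	fmt.Println(err)
    }

Treating Forbidden as retryable is what lets the components above keep retrying quietly until the bootstrap credentials arrive.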
Oct 2 19:32:13.349000 audit[2111]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.349000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff3a9e6f00 a2=0 a3=7fff3a9e6eec items=0 ppid=2091 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:32:13.369000 audit[2116]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.369000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcce0c1b70 a2=0 a3=7ffcce0c1b5c items=0 ppid=2091 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:32:13.419000 audit[2121]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.419000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcbb81e340 a2=0 a3=7ffcbb81e32c items=0 ppid=2091 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.419000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:32:13.421457 kubelet[2091]: I1002 19:32:13.421432 2091 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 2 19:32:13.421000 audit[2122]: NETFILTER_CFG table=mangle:7 family=10 entries=2 op=nft_register_chain pid=2122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:13.421000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff94e5c820 a2=0 a3=7fff94e5c80c items=0 ppid=2091 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.421000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:32:13.423414 kubelet[2091]: I1002 19:32:13.423385 2091 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 2 19:32:13.423490 kubelet[2091]: I1002 19:32:13.423420 2091 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 2 19:32:13.423490 kubelet[2091]: I1002 19:32:13.423450 2091 kubelet.go:2303] "Starting kubelet main sync loop" Oct 2 19:32:13.423576 kubelet[2091]: E1002 19:32:13.423507 2091 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:32:13.422000 audit[2123]: NETFILTER_CFG table=mangle:8 family=2 entries=1 op=nft_register_chain pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.422000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6132b2a0 a2=0 a3=7ffc6132b28c items=0 ppid=2091 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.422000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:32:13.425345 kubelet[2091]: W1002 19:32:13.425327 2091 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:32:13.424000 audit[2124]: NETFILTER_CFG table=mangle:9 family=10 entries=1 op=nft_register_chain pid=2124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:13.424000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc994bc9d0 a2=0 a3=7ffc994bc9bc items=0 ppid=2091 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.424000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:32:13.425761 kubelet[2091]: E1002 19:32:13.425740 2091 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:32:13.425000 audit[2125]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.425000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffdd2d51880 a2=0 a3=7ffdd2d5186c items=0 ppid=2091 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.425000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:32:13.426000 audit[2126]: NETFILTER_CFG table=nat:11 family=10 entries=2 op=nft_register_chain pid=2126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:13.426000 audit[2126]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fffb06cd430 a2=0 a3=7fffb06cd41c items=0 ppid=2091 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.426000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:32:13.427000 audit[2127]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_chain pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:13.427000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3d27ec50 a2=0 a3=7ffe3d27ec3c items=0 ppid=2091 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.427000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:32:13.428000 audit[2128]: NETFILTER_CFG table=filter:13 family=10 entries=2 op=nft_register_chain pid=2128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:13.428000 audit[2128]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeae9e8530 a2=0 a3=7ffeae9e851c items=0 ppid=2091 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:13.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:32:13.475873 kubelet[2091]: E1002 19:32:13.475760 2091 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.22.219\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Oct 2 19:32:13.562157 kubelet[2091]: I1002 19:32:13.562109 2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.219" Oct 2 19:32:13.564321 kubelet[2091]: E1002 19:32:13.564298 2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.219" Oct 2 19:32:13.564850 kubelet[2091]: E1002 19:32:13.564232 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad10b0a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.219 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319154442, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 562060608, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad10b0a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.566051 kubelet[2091]: E1002 19:32:13.565968 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad12427", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.219 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319160871, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 562069015, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad12427" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.573391 kubelet[2091]: E1002 19:32:13.573297 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad13a18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.219 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319166488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 562072835, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad13a18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
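[Note] The NETFILTER_CFG records interleaved above are the kubelet shelling out to xtables-nft-multi; decoding their PROCTITLE fields with the same hex trick as the earlier sketch yields commands such as "iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle", "-N KUBE-FIREWALL -t filter", "-I OUTPUT/-I INPUT -t filter -j KUBE-FIREWALL", an append to KUBE-FIREWALL matching --dst 127.0.0.0/8 with the comment "block incoming localnet connections" (that record is truncated before the rest of the rule), and KUBE-KUBELET-CANARY chains in the mangle, nat and filter tables for both families; the kubelet's two "Initialized iptables rules." lines for IPv4 and IPv6 report the same work from its side. In these records family=2 is AF_INET and family=10 is AF_INET6. A small standard-library sketch, illustrative only, that splits the key=value fields of one trimmed record:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// A trimmed copy of one NETFILTER_CFG record from above.
    	record := `table=filter:6 family=2 entries=1 op=nft_register_rule pid=2121 comm="iptables"`

    	fields := map[string]string{}
    	for _, kv := range strings.Fields(record) {
    		if k, v, ok := strings.Cut(kv, "="); ok {
    			fields[k] = strings.Trim(v, `"`)
    		}
    	}

    	families := map[string]string{"2": "AF_INET (IPv4)", "10": "AF_INET6 (IPv6)"}
    	fmt.Printf("table=%s family=%s entries=%s op=%s\n",
    		fields["table"], families[fields["family"]], fields["entries"], fields["op"])
    }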
Oct 2 19:32:13.878748 kubelet[2091]: E1002 19:32:13.878629 2091 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.22.219\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Oct 2 19:32:13.965975 kubelet[2091]: I1002 19:32:13.965947 2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.219" Oct 2 19:32:13.967408 kubelet[2091]: E1002 19:32:13.967324 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad10b0a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.22.219 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319154442, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 965900224, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad10b0a" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
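[Note] The node-lease retry interval has stepped from 200ms to 400ms and now 800ms, consistent with a doubling backoff while the API server keeps rejecting the still-anonymous client. A toy sketch of that progression (the controller's actual cap and jitter are not visible in this log, so none are assumed):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 200 * time.Millisecond
    	for attempt := 1; attempt <= 4; attempt++ {
    		fmt.Printf("attempt %d: next retry in %s\n", attempt, interval)
    		interval *= 2 // doubling backoff, matching 200ms -> 400ms -> 800ms above
    	}
    }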
Oct 2 19:32:13.967686 kubelet[2091]: E1002 19:32:13.967658 2091 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.22.219" Oct 2 19:32:13.968513 kubelet[2091]: E1002 19:32:13.968447 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad12427", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.22.219 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319160871, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 965907752, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad12427" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:32:13.973645 kubelet[2091]: E1002 19:32:13.973565 2091 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.22.219.178a613f1ad13a18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.22.219", UID:"172.31.22.219", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.22.219 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.22.219"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 319166488, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 32, 13, 965912626, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.22.219"}': 'events "172.31.22.219.178a613f1ad13a18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:32:14.221636 kubelet[2091]: I1002 19:32:14.221585 2091 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:32:14.242099 kubelet[2091]: E1002 19:32:14.242051 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.655929 kubelet[2091]: E1002 19:32:14.655825 2091 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.22.219" not found Oct 2 19:32:14.687914 kubelet[2091]: E1002 19:32:14.687798 2091 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.22.219\" not found" node="172.31.22.219" Oct 2 19:32:14.769233 kubelet[2091]: I1002 19:32:14.769199 2091 kubelet_node_status.go:70] "Attempting to register node" node="172.31.22.219" Oct 2 19:32:14.774587 kubelet[2091]: I1002 19:32:14.774554 2091 kubelet_node_status.go:73] "Successfully registered node" node="172.31.22.219" Oct 2 19:32:14.795475 kubelet[2091]: I1002 19:32:14.795447 2091 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:32:14.795919 env[1632]: time="2023-10-02T19:32:14.795828383Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:32:14.796352 kubelet[2091]: I1002 19:32:14.796067 2091 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:32:15.040566 sudo[1893]: pam_unix(sudo:session): session closed for user root Oct 2 19:32:15.052786 kernel: kauditd_printk_skb: 477 callbacks suppressed Oct 2 19:32:15.052856 kernel: audit: type=1106 audit(1696275135.039:615): pid=1893 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:15.039000 audit[1893]: USER_END pid=1893 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:15.039000 audit[1893]: CRED_DISP pid=1893 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:15.069462 kernel: audit: type=1104 audit(1696275135.039:616): pid=1893 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:32:15.070282 sshd[1890]: pam_unix(sshd:session): session closed for user core Oct 2 19:32:15.072000 audit[1890]: USER_END pid=1890 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:15.077480 systemd[1]: sshd@6-172.31.22.219:22-139.178.89.65:47642.service: Deactivated successfully. Oct 2 19:32:15.078662 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:32:15.080314 systemd-logind[1623]: Session 7 logged out. Waiting for processes to exit. 
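[Note] The entries above mark the turning point: once "Certificate rotation detected" swaps the anonymous connection for the newly issued client certificate, registration of 172.31.22.219 succeeds and the node's pod CIDR (192.168.1.0/24) is handed to containerd over CRI; the "No cni config template is specified" message just restates that containerd will wait for the CNI plugin to drop its own config. A quick standard-library check, not kubelet code, of what that CIDR provides:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// The pod CIDR handed to the runtime above.
    	_, ipnet, err := net.ParseCIDR("192.168.1.0/24")
    	if err != nil {
    		panic(err)
    	}
    	ones, bits := ipnet.Mask.Size()
    	fmt.Printf("%s -> /%d, %d addresses in the node's pod range\n", ipnet, ones, 1<<(bits-ones))
    }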
Oct 2 19:32:15.072000 audit[1890]: CRED_DISP pid=1890 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:15.082936 systemd-logind[1623]: Removed session 7. Oct 2 19:32:15.089210 kernel: audit: type=1106 audit(1696275135.072:617): pid=1890 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:15.089349 kernel: audit: type=1104 audit(1696275135.072:618): pid=1890 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Oct 2 19:32:15.089380 kernel: audit: type=1131 audit(1696275135.072:619): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.219:22-139.178.89.65:47642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:15.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.22.219:22-139.178.89.65:47642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:15.241556 kubelet[2091]: I1002 19:32:15.241508 2091 apiserver.go:52] "Watching apiserver" Oct 2 19:32:15.242612 kubelet[2091]: E1002 19:32:15.242585 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:15.244289 kubelet[2091]: I1002 19:32:15.244265 2091 topology_manager.go:215] "Topology Admit Handler" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" podNamespace="kube-system" podName="cilium-lgvxf" Oct 2 19:32:15.244690 kubelet[2091]: I1002 19:32:15.244542 2091 topology_manager.go:215] "Topology Admit Handler" podUID="089188a3-d007-498e-a895-4cb65e5c142c" podNamespace="kube-system" podName="kube-proxy-wgbh5" Oct 2 19:32:15.252679 systemd[1]: Created slice kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice. 
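[Note] The kernel "audit: type=..." lines above carry their own timestamps, e.g. audit(1696275135.072:619): seconds and milliseconds since the Unix epoch, then the record serial. Converting one back to wall time reproduces the 19:32:15.072 prefix of the matching records, which also suggests the node logs in UTC; the earlier "kauditd_printk_skb: 477 callbacks suppressed" line means 477 audit messages were rate-limited out of the kernel log around that point. A one-line sketch, not part of the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// audit(1696275135.072:619) -> epoch seconds.milliseconds, plus the record serial.
    	fmt.Println(time.Unix(1696275135, 72_000_000).UTC()) // 2023-10-02 19:32:15.072 +0000 UTC
    }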
Oct 2 19:32:15.262508 kubelet[2091]: I1002 19:32:15.262478 2091 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:32:15.269793 kubelet[2091]: I1002 19:32:15.268628 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-lib-modules\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.269793 kubelet[2091]: I1002 19:32:15.268680 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjj5\" (UniqueName: \"kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-kube-api-access-2sjj5\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.269793 kubelet[2091]: I1002 19:32:15.268711 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-run\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.269793 kubelet[2091]: I1002 19:32:15.268742 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-etc-cni-netd\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.269793 kubelet[2091]: I1002 19:32:15.268772 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-xtables-lock\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.269793 kubelet[2091]: I1002 19:32:15.268799 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-bpf-maps\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.271856 kubelet[2091]: I1002 19:32:15.268826 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-cgroup\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.271856 kubelet[2091]: I1002 19:32:15.269606 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lt75\" (UniqueName: \"kubernetes.io/projected/089188a3-d007-498e-a895-4cb65e5c142c-kube-api-access-5lt75\") pod \"kube-proxy-wgbh5\" (UID: \"089188a3-d007-498e-a895-4cb65e5c142c\") " pod="kube-system/kube-proxy-wgbh5" Oct 2 19:32:15.271856 kubelet[2091]: I1002 19:32:15.269785 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cni-path\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.271856 kubelet[2091]: I1002 
19:32:15.269822 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-config-path\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.271856 kubelet[2091]: I1002 19:32:15.271468 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-net\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.272203 kubelet[2091]: I1002 19:32:15.271514 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-kernel\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.272203 kubelet[2091]: I1002 19:32:15.271555 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hubble-tls\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.272203 kubelet[2091]: I1002 19:32:15.271582 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/089188a3-d007-498e-a895-4cb65e5c142c-xtables-lock\") pod \"kube-proxy-wgbh5\" (UID: \"089188a3-d007-498e-a895-4cb65e5c142c\") " pod="kube-system/kube-proxy-wgbh5" Oct 2 19:32:15.272203 kubelet[2091]: I1002 19:32:15.271623 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/089188a3-d007-498e-a895-4cb65e5c142c-lib-modules\") pod \"kube-proxy-wgbh5\" (UID: \"089188a3-d007-498e-a895-4cb65e5c142c\") " pod="kube-system/kube-proxy-wgbh5" Oct 2 19:32:15.272203 kubelet[2091]: I1002 19:32:15.271668 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-clustermesh-secrets\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.273088 kubelet[2091]: I1002 19:32:15.271760 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/089188a3-d007-498e-a895-4cb65e5c142c-kube-proxy\") pod \"kube-proxy-wgbh5\" (UID: \"089188a3-d007-498e-a895-4cb65e5c142c\") " pod="kube-system/kube-proxy-wgbh5" Oct 2 19:32:15.273088 kubelet[2091]: I1002 19:32:15.271792 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hostproc\") pod \"cilium-lgvxf\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " pod="kube-system/cilium-lgvxf" Oct 2 19:32:15.283096 systemd[1]: Created slice kubepods-besteffort-pod089188a3_d007_498e_a895_4cb65e5c142c.slice. 
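[Note] The two "Created slice" lines map straight back to the Topology Admit Handler entries: with the systemd cgroup driver (CgroupDriver:"systemd" in the nodeConfig above), each pod gets a kubepods-<qos>-pod<uid>.slice unit, with the dashes of the pod UID replaced by underscores; cilium-lgvxf lands in the burstable slice and kube-proxy-wgbh5 in the besteffort one. An illustrative helper, not kubelet code, that reproduces those unit names from the UIDs logged above:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice is a hypothetical helper: systemd unit names cannot contain the
    // pod UID's dashes, so they become underscores.
    func podSlice(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(podSlice("burstable", "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"))  // cilium-lgvxf
    	fmt.Println(podSlice("besteffort", "089188a3-d007-498e-a895-4cb65e5c142c")) // kube-proxy-wgbh5
    }

Both printed names match the slices systemd reports creating in the surrounding entries.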
Oct 2 19:32:15.579142 env[1632]: time="2023-10-02T19:32:15.579072797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgvxf,Uid:a1eeba4e-db02-4287-bf8e-d8bd41c720f8,Namespace:kube-system,Attempt:0,}" Oct 2 19:32:15.593570 env[1632]: time="2023-10-02T19:32:15.593527247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgbh5,Uid:089188a3-d007-498e-a895-4cb65e5c142c,Namespace:kube-system,Attempt:0,}" Oct 2 19:32:16.230425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113524803.mount: Deactivated successfully. Oct 2 19:32:16.240471 env[1632]: time="2023-10-02T19:32:16.240416541Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.241782 env[1632]: time="2023-10-02T19:32:16.241738186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.243038 kubelet[2091]: E1002 19:32:16.242992 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:16.247721 env[1632]: time="2023-10-02T19:32:16.247677955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.249559 env[1632]: time="2023-10-02T19:32:16.249516461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.250763 env[1632]: time="2023-10-02T19:32:16.250732188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.252747 env[1632]: time="2023-10-02T19:32:16.252714170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.253764 env[1632]: time="2023-10-02T19:32:16.253732877Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.256607 env[1632]: time="2023-10-02T19:32:16.256568376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:16.285828 env[1632]: time="2023-10-02T19:32:16.276991714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:32:16.285828 env[1632]: time="2023-10-02T19:32:16.277038085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:32:16.285828 env[1632]: time="2023-10-02T19:32:16.277054672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:32:16.285828 env[1632]: time="2023-10-02T19:32:16.277232159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71586c8465d13278f25573a6634905175b65a9afdbaf21c619ee862f04572b6d pid=2142 runtime=io.containerd.runc.v2 Oct 2 19:32:16.299195 env[1632]: time="2023-10-02T19:32:16.298107077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:32:16.299195 env[1632]: time="2023-10-02T19:32:16.298243136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:32:16.299195 env[1632]: time="2023-10-02T19:32:16.298264928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:32:16.299557 env[1632]: time="2023-10-02T19:32:16.299252579Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68 pid=2159 runtime=io.containerd.runc.v2 Oct 2 19:32:16.318048 systemd[1]: Started cri-containerd-090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68.scope. Oct 2 19:32:16.320425 systemd[1]: Started cri-containerd-71586c8465d13278f25573a6634905175b65a9afdbaf21c619ee862f04572b6d.scope. Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.373631 kernel: audit: type=1400 audit(1696275136.354:620): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.373818 kernel: audit: type=1400 audit(1696275136.354:621): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383940 kernel: audit: type=1400 audit(1696275136.354:622): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.384163 kernel: audit: type=1400 audit(1696275136.354:623): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388152 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.366000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.366000 audit: BPF prog-id=73 op=LOAD Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2142 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353836633834363564313332373866323535373361363633343930 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2142 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353836633834363564313332373866323535373361363633343930 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.367000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.374000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.378000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.378000 audit: BPF prog-id=74 op=LOAD Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00019fc48 a2=10 a3=1c items=0 ppid=2159 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039306130373365646532633665376139396234393334613439613662 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00019f6b0 a2=3c a3=c items=0 ppid=2159 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039306130373365646532633665376139396234393334613439613662 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:32:16.383000 audit: BPF prog-id=75 op=LOAD Oct 2 19:32:16.383000 audit[2175]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019f9d8 a2=78 a3=c0001e7210 items=0 ppid=2159 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039306130373365646532633665376139396234393334613439613662 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.383000 audit[2175]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00019f770 a2=78 a3=c0001e7258 items=0 ppid=2159 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039306130373365646532633665376139396234393334613439613662 Oct 2 19:32:16.388000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:32:16.388000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { perfmon } for pid=2175 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2175]: AVC avc: denied { bpf } for pid=2175 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit: BPF prog-id=78 op=LOAD Oct 2 19:32:16.388000 audit[2175]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00019fc30 a2=78 a3=c0001e7668 items=0 ppid=2159 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039306130373365646532633665376139396234393334613439613662 Oct 2 19:32:16.367000 audit[2158]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c0000a0340 items=0 ppid=2142 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353836633834363564313332373866323535373361363633343930 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { bpf 
} for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.388000 audit: BPF prog-id=79 op=LOAD Oct 2 19:32:16.388000 audit[2158]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c0000a0388 items=0 ppid=2142 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.388000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353836633834363564313332373866323535373361363633343930 Oct 2 19:32:16.388000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:32:16.389000 audit: BPF prog-id=77 op=UNLOAD Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { perfmon } 
for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { perfmon } for pid=2158 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit[2158]: AVC avc: denied { bpf } for pid=2158 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:16.389000 audit: BPF prog-id=80 op=LOAD Oct 2 19:32:16.389000 audit[2158]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c0000a0798 items=0 ppid=2142 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:16.389000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731353836633834363564313332373866323535373361363633343930 Oct 2 19:32:16.423291 env[1632]: time="2023-10-02T19:32:16.422144832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wgbh5,Uid:089188a3-d007-498e-a895-4cb65e5c142c,Namespace:kube-system,Attempt:0,} returns sandbox id \"71586c8465d13278f25573a6634905175b65a9afdbaf21c619ee862f04572b6d\"" Oct 2 19:32:16.425433 env[1632]: time="2023-10-02T19:32:16.425396894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\"" Oct 2 19:32:16.427675 env[1632]: time="2023-10-02T19:32:16.427635007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lgvxf,Uid:a1eeba4e-db02-4287-bf8e-d8bd41c720f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\"" Oct 2 19:32:17.245407 kubelet[2091]: E1002 19:32:17.245353 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:18.048825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3261091464.mount: Deactivated successfully. 
Oct 2 19:32:18.246531 kubelet[2091]: E1002 19:32:18.246458 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:18.858651 env[1632]: time="2023-10-02T19:32:18.858595248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:18.860933 env[1632]: time="2023-10-02T19:32:18.860888316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:18.863679 env[1632]: time="2023-10-02T19:32:18.863597428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:18.865381 env[1632]: time="2023-10-02T19:32:18.865342849Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:18.865983 env[1632]: time="2023-10-02T19:32:18.865951467Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.2\" returns image reference \"sha256:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0\"" Oct 2 19:32:18.867928 env[1632]: time="2023-10-02T19:32:18.867899043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:32:18.869825 env[1632]: time="2023-10-02T19:32:18.869789549Z" level=info msg="CreateContainer within sandbox \"71586c8465d13278f25573a6634905175b65a9afdbaf21c619ee862f04572b6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:32:18.908099 env[1632]: time="2023-10-02T19:32:18.907967038Z" level=info msg="CreateContainer within sandbox \"71586c8465d13278f25573a6634905175b65a9afdbaf21c619ee862f04572b6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a8f83510cabf2fba731237f600808fcfe4df06cf983956e032002cc64f42fb3\"" Oct 2 19:32:18.909000 env[1632]: time="2023-10-02T19:32:18.908949803Z" level=info msg="StartContainer for \"4a8f83510cabf2fba731237f600808fcfe4df06cf983956e032002cc64f42fb3\"" Oct 2 19:32:18.947261 systemd[1]: Started cri-containerd-4a8f83510cabf2fba731237f600808fcfe4df06cf983956e032002cc64f42fb3.scope. 
Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2142 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:18.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386638333531306361626632666261373331323337663630303830 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.965000 audit: BPF prog-id=81 op=LOAD Oct 2 19:32:18.965000 audit[2227]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001479d8 a2=78 a3=c0002e2070 items=0 ppid=2142 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:18.965000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386638333531306361626632666261373331323337663630303830 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit: BPF prog-id=82 op=LOAD Oct 2 19:32:18.966000 audit[2227]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000147770 a2=78 a3=c0002e20b8 items=0 ppid=2142 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:18.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386638333531306361626632666261373331323337663630303830 Oct 2 19:32:18.966000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:32:18.966000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { perfmon } for pid=2227 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit[2227]: AVC avc: denied { bpf } for pid=2227 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:18.966000 audit: BPF prog-id=83 op=LOAD Oct 2 19:32:18.966000 audit[2227]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000147c30 a2=78 a3=c0002e2148 items=0 ppid=2142 pid=2227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:18.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461386638333531306361626632666261373331323337663630303830 Oct 2 19:32:18.998708 env[1632]: time="2023-10-02T19:32:18.998654165Z" level=info msg="StartContainer for \"4a8f83510cabf2fba731237f600808fcfe4df06cf983956e032002cc64f42fb3\" returns successfully" Oct 2 19:32:19.100000 audit[2277]: NETFILTER_CFG table=mangle:14 family=2 entries=1 op=nft_register_chain pid=2277 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.100000 audit[2277]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3eb9c5a0 a2=0 a3=7ffc3eb9c58c items=0 ppid=2237 pid=2277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:32:19.103000 audit[2278]: NETFILTER_CFG table=mangle:15 family=10 entries=1 op=nft_register_chain pid=2278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.103000 audit[2278]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1eb6f4a0 a2=0 a3=7ffe1eb6f48c items=0 ppid=2237 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.103000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:32:19.106000 audit[2279]: NETFILTER_CFG table=nat:16 family=10 entries=1 op=nft_register_chain pid=2279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.106000 audit[2279]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcde6ba190 a2=0 a3=7ffcde6ba17c items=0 ppid=2237 pid=2279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.106000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:32:19.108000 audit[2280]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_chain pid=2280 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.108000 audit[2280]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc5b8d3f0 a2=0 a3=7ffdc5b8d3dc items=0 ppid=2237 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:32:19.110000 audit[2281]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.110000 audit[2281]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe33e595c0 a2=0 a3=7ffe33e595ac items=0 ppid=2237 pid=2281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.110000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:32:19.112000 audit[2282]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_chain pid=2282 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.112000 audit[2282]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce8047f10 a2=0 a3=7ffce8047efc items=0 ppid=2237 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:32:19.203000 audit[2283]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.203000 audit[2283]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe5c936fe0 a2=0 a3=7ffe5c936fcc items=0 ppid=2237 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.203000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:32:19.207000 audit[2285]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.207000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdba3aa6e0 a2=0 a3=7ffdba3aa6cc items=0 ppid=2237 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:32:19.212000 audit[2288]: NETFILTER_CFG table=filter:22 family=2 entries=2 op=nft_register_chain pid=2288 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.212000 audit[2288]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd74a71600 a2=0 a3=7ffd74a715ec items=0 ppid=2237 pid=2288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:32:19.213000 audit[2289]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.213000 audit[2289]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef18d7f50 a2=0 a3=7ffef18d7f3c items=0 ppid=2237 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:32:19.216000 audit[2291]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.216000 audit[2291]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd41cb5e10 a2=0 a3=7ffd41cb5dfc items=0 ppid=2237 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:32:19.218000 audit[2292]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_chain pid=2292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.218000 audit[2292]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe110642b0 a2=0 a3=7ffe1106429c 
items=0 ppid=2237 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.218000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:32:19.222000 audit[2294]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2294 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.222000 audit[2294]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe41fb1b50 a2=0 a3=7ffe41fb1b3c items=0 ppid=2237 pid=2294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:32:19.234000 audit[2297]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.234000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff28b060f0 a2=0 a3=7fff28b060dc items=0 ppid=2237 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:32:19.238000 audit[2298]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2298 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.238000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2f9eac10 a2=0 a3=7ffc2f9eabfc items=0 ppid=2237 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:32:19.242000 audit[2300]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2300 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.242000 audit[2300]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff70c2490 a2=0 a3=7ffff70c247c items=0 ppid=2237 pid=2300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:32:19.243000 audit[2301]: NETFILTER_CFG table=filter:30 family=2 entries=1 
op=nft_register_chain pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.243000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa8ce4d20 a2=0 a3=7fffa8ce4d0c items=0 ppid=2237 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:32:19.247957 kubelet[2091]: E1002 19:32:19.247861 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:19.255000 audit[2303]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_rule pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.255000 audit[2303]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc9ce7900 a2=0 a3=7fffc9ce78ec items=0 ppid=2237 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:32:19.262000 audit[2306]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.262000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc45b25530 a2=0 a3=7ffc45b2551c items=0 ppid=2237 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:32:19.266000 audit[2309]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.266000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6d7e9e40 a2=0 a3=7ffc6d7e9e2c items=0 ppid=2237 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:32:19.268000 audit[2310]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2310 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.268000 audit[2310]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5c99aad0 a2=0 a3=7ffd5c99aabc items=0 ppid=2237 
pid=2310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:32:19.271000 audit[2312]: NETFILTER_CFG table=nat:35 family=2 entries=2 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.271000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff4352a840 a2=0 a3=7fff4352a82c items=0 ppid=2237 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:32:19.307000 audit[2317]: NETFILTER_CFG table=nat:36 family=2 entries=2 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.307000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc84635160 a2=0 a3=7ffc8463514c items=0 ppid=2237 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:32:19.309000 audit[2318]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.309000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3c3eb350 a2=0 a3=7ffc3c3eb33c items=0 ppid=2237 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:32:19.312000 audit[2320]: NETFILTER_CFG table=nat:38 family=2 entries=2 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:32:19.312000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff676798e0 a2=0 a3=7fff676798cc items=0 ppid=2237 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:32:19.332000 audit[2326]: NETFILTER_CFG table=filter:39 family=2 entries=8 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 
19:32:19.332000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=4956 a0=3 a1=7ffe26d89690 a2=0 a3=7ffe26d8967c items=0 ppid=2237 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.332000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:32:19.367000 audit[2326]: NETFILTER_CFG table=nat:40 family=2 entries=14 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:32:19.367000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffe26d89690 a2=0 a3=7ffe26d8967c items=0 ppid=2237 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:32:19.371000 audit[2332]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.371000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe759eb060 a2=0 a3=7ffe759eb04c items=0 ppid=2237 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.371000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:32:19.385000 audit[2335]: NETFILTER_CFG table=filter:42 family=10 entries=2 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.385000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffca23ba2f0 a2=0 a3=7ffca23ba2dc items=0 ppid=2237 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.385000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:32:19.391000 audit[2338]: NETFILTER_CFG table=filter:43 family=10 entries=2 op=nft_register_chain pid=2338 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.391000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd067252e0 a2=0 a3=7ffd067252cc items=0 ppid=2237 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 
19:32:19.397000 audit[2339]: NETFILTER_CFG table=filter:44 family=10 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.397000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7a8cd2c0 a2=0 a3=7ffc7a8cd2ac items=0 ppid=2237 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:32:19.404000 audit[2341]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.404000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdfeb715c0 a2=0 a3=7ffdfeb715ac items=0 ppid=2237 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:32:19.406000 audit[2342]: NETFILTER_CFG table=filter:46 family=10 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.406000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceb948c80 a2=0 a3=7ffceb948c6c items=0 ppid=2237 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:32:19.410000 audit[2344]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.410000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff62bc6dc0 a2=0 a3=7fff62bc6dac items=0 ppid=2237 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:32:19.416000 audit[2347]: NETFILTER_CFG table=filter:48 family=10 entries=2 op=nft_register_chain pid=2347 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.416000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffffddd5110 a2=0 a3=7ffffddd50fc items=0 ppid=2237 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.416000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:32:19.419000 audit[2348]: NETFILTER_CFG table=filter:49 family=10 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.419000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffccaead990 a2=0 a3=7ffccaead97c items=0 ppid=2237 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:32:19.422000 audit[2350]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.422000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc8124e630 a2=0 a3=7ffc8124e61c items=0 ppid=2237 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:32:19.424000 audit[2351]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.424000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeac1b32b0 a2=0 a3=7ffeac1b329c items=0 ppid=2237 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.424000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:32:19.430000 audit[2353]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.430000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd70a0d4d0 a2=0 a3=7ffd70a0d4bc items=0 ppid=2237 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.430000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:32:19.435000 audit[2356]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.435000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd237f6a20 a2=0 a3=7ffd237f6a0c items=0 ppid=2237 
pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.435000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:32:19.440000 audit[2359]: NETFILTER_CFG table=filter:54 family=10 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.440000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeebb17130 a2=0 a3=7ffeebb1711c items=0 ppid=2237 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:32:19.444000 audit[2360]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.444000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff954fae70 a2=0 a3=7fff954fae5c items=0 ppid=2237 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:32:19.450000 audit[2362]: NETFILTER_CFG table=nat:56 family=10 entries=2 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.450000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd51038240 a2=0 a3=7ffd5103822c items=0 ppid=2237 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:32:19.467000 audit[2365]: NETFILTER_CFG table=nat:57 family=10 entries=2 op=nft_register_chain pid=2365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.467000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff16bb9810 a2=0 a3=7fff16bb97fc items=0 ppid=2237 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.467000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:32:19.469000 audit[2366]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.469000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeff455110 a2=0 a3=7ffeff4550fc items=0 ppid=2237 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:32:19.477000 audit[2368]: NETFILTER_CFG table=nat:59 family=10 entries=2 op=nft_register_chain pid=2368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.477000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd8cbe8980 a2=0 a3=7ffd8cbe896c items=0 ppid=2237 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:32:19.479000 audit[2369]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.479000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc488eafc0 a2=0 a3=7ffc488eafac items=0 ppid=2237 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:32:19.482000 audit[2371]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_rule pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.482000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb05e7410 a2=0 a3=7ffdb05e73fc items=0 ppid=2237 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:32:19.487000 audit[2374]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:32:19.487000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd62940540 a2=0 a3=7ffd6294052c items=0 ppid=2237 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.487000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:32:19.490000 audit[2376]: NETFILTER_CFG table=filter:63 family=10 entries=3 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:32:19.490000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc2bf8f260 a2=0 a3=7ffc2bf8f24c items=0 ppid=2237 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.490000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:32:19.491000 audit[2376]: NETFILTER_CFG table=nat:64 family=10 entries=7 op=nft_register_chain pid=2376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:32:19.491000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffc2bf8f260 a2=0 a3=7ffc2bf8f24c items=0 ppid=2237 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:19.491000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:32:19.894970 systemd[1]: run-containerd-runc-k8s.io-4a8f83510cabf2fba731237f600808fcfe4df06cf983956e032002cc64f42fb3-runc.ul5ZJ8.mount: Deactivated successfully. Oct 2 19:32:20.249913 kubelet[2091]: E1002 19:32:20.249799 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:21.251081 kubelet[2091]: E1002 19:32:21.251003 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:22.251459 kubelet[2091]: E1002 19:32:22.251364 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:23.252539 kubelet[2091]: E1002 19:32:23.252483 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:24.253405 kubelet[2091]: E1002 19:32:24.253365 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.259476 kubelet[2091]: E1002 19:32:25.259136 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.735700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988708071.mount: Deactivated successfully. 
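Editor's note: the NETFILTER_CFG / SYSCALL / PROCTITLE triples above are the audit trail of the Kubernetes service chains (KUBE-SERVICES, KUBE-POSTROUTING, KUBE-NODEPORTS, ...) being programmed through the iptables-nft compatibility layer (exe="/usr/sbin/xtables-nft-multi"). In these records arch=c000003e syscall=46 is sendmsg(2) on x86_64, i.e. the netlink message carrying each nft transaction, and the proctitle= value is the invoking command line, hex-encoded with NUL bytes between arguments. A minimal decoding sketch in Python (an illustration of the audit encoding, not part of any tool shown in this log):

    # Decode an audit PROCTITLE payload: hex-encoded command line, NUL-separated argv.
    def decode_proctitle(hex_payload):
        raw = bytes.fromhex(hex_payload)
        return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

    # The first PROCTITLE record in this section decodes to:
    # iptables -w 5 -W 100000 -N KUBE-PROXY-FIREWALL -t filter
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572"))

Decoding the later ip6tables payloads the same way shows the matching IPv6 chains and rules being installed.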
Oct 2 19:32:26.260247 kubelet[2091]: E1002 19:32:26.260187 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:27.261149 kubelet[2091]: E1002 19:32:27.261103 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:28.262120 kubelet[2091]: E1002 19:32:28.262021 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:29.263275 kubelet[2091]: E1002 19:32:29.263208 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:29.767169 env[1632]: time="2023-10-02T19:32:29.767101713Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:29.769592 env[1632]: time="2023-10-02T19:32:29.769549696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:29.772184 env[1632]: time="2023-10-02T19:32:29.772142187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:29.772944 env[1632]: time="2023-10-02T19:32:29.772866885Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Oct 2 19:32:29.775687 env[1632]: time="2023-10-02T19:32:29.775652886Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:32:29.789383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095318791.mount: Deactivated successfully. Oct 2 19:32:29.797820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152017967.mount: Deactivated successfully. Oct 2 19:32:29.805933 env[1632]: time="2023-10-02T19:32:29.805878602Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\"" Oct 2 19:32:29.806788 env[1632]: time="2023-10-02T19:32:29.806752484Z" level=info msg="StartContainer for \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\"" Oct 2 19:32:29.828855 systemd[1]: Started cri-containerd-a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa.scope. Oct 2 19:32:29.845534 systemd[1]: cri-containerd-a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa.scope: Deactivated successfully. 
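Editor's note: the kubelet line repeated roughly once a second throughout this section comes from its static-pod file source: the kubelet is pointed at a manifest directory (normally staticPodPath in the kubelet configuration, here /etc/kubernetes/manifests) that does not exist on this node, so every poll logs "path does not exist, ignoring" and moves on. It is noise rather than a fault; creating the directory or unsetting the path silences it. A hedged, stand-alone Python sketch of that check (not kubelet code; only the path and message text are taken from the log):

    import os
    import time

    STATIC_POD_PATH = "/etc/kubernetes/manifests"   # matches the path in the log

    def poll_static_pod_dir(path=STATIC_POD_PATH):
        # Mirror of the behaviour described by the log line: a missing directory is
        # reported and ignored; an existing one would be listed and its manifests
        # handed on for syncing.
        if not os.path.exists(path):
            print(f'Unable to read config path: path does not exist, ignoring path="{path}"')
            return []
        return [os.path.join(path, name) for name in sorted(os.listdir(path))]

    if __name__ == "__main__":
        for _ in range(3):      # the kubelet repeats the check roughly once per second
            poll_static_pod_dir()
            time.sleep(1)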
Oct 2 19:32:30.263974 kubelet[2091]: E1002 19:32:30.263937 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:30.279633 env[1632]: time="2023-10-02T19:32:30.279571204Z" level=info msg="shim disconnected" id=a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa Oct 2 19:32:30.279633 env[1632]: time="2023-10-02T19:32:30.279631114Z" level=warning msg="cleaning up after shim disconnected" id=a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa namespace=k8s.io Oct 2 19:32:30.279886 env[1632]: time="2023-10-02T19:32:30.279643571Z" level=info msg="cleaning up dead shim" Oct 2 19:32:30.314053 env[1632]: time="2023-10-02T19:32:30.313997320Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2402 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:30Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:30.314436 env[1632]: time="2023-10-02T19:32:30.314309406Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:32:30.314635 env[1632]: time="2023-10-02T19:32:30.314590255Z" level=error msg="Failed to pipe stderr of container \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\"" error="reading from a closed fifo" Oct 2 19:32:30.314770 env[1632]: time="2023-10-02T19:32:30.314735194Z" level=error msg="Failed to pipe stdout of container \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\"" error="reading from a closed fifo" Oct 2 19:32:30.323523 env[1632]: time="2023-10-02T19:32:30.323442365Z" level=error msg="StartContainer for \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:30.323812 kubelet[2091]: E1002 19:32:30.323790 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa" Oct 2 19:32:30.323963 kubelet[2091]: E1002 19:32:30.323945 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:30.323963 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:30.323963 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:32:30.324120 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2sjj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:30.324120 kubelet[2091]: E1002 19:32:30.324006 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:32:30.482671 env[1632]: time="2023-10-02T19:32:30.482317478Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:32:30.487642 kubelet[2091]: I1002 19:32:30.487539 2091 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wgbh5" podStartSLOduration=14.044815435 podCreationTimestamp="2023-10-02 19:32:14 +0000 UTC" firstStartedPulling="2023-10-02 19:32:16.424104151 +0000 UTC m=+3.911383606" lastFinishedPulling="2023-10-02 19:32:18.866755975 +0000 UTC m=+6.354035439" observedRunningTime="2023-10-02 19:32:19.461277032 +0000 UTC m=+6.948556502" watchObservedRunningTime="2023-10-02 19:32:30.487467268 +0000 UTC m=+17.974746739" Oct 2 19:32:30.522112 env[1632]: time="2023-10-02T19:32:30.521025990Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\"" Oct 2 19:32:30.522852 env[1632]: time="2023-10-02T19:32:30.522815009Z" level=info msg="StartContainer for \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\"" Oct 2 
19:32:30.542379 systemd[1]: Started cri-containerd-4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f.scope. Oct 2 19:32:30.558619 systemd[1]: cri-containerd-4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f.scope: Deactivated successfully. Oct 2 19:32:30.576183 env[1632]: time="2023-10-02T19:32:30.576090605Z" level=info msg="shim disconnected" id=4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f Oct 2 19:32:30.576183 env[1632]: time="2023-10-02T19:32:30.576182375Z" level=warning msg="cleaning up after shim disconnected" id=4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f namespace=k8s.io Oct 2 19:32:30.576483 env[1632]: time="2023-10-02T19:32:30.576195110Z" level=info msg="cleaning up dead shim" Oct 2 19:32:30.585686 env[1632]: time="2023-10-02T19:32:30.585620627Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2441 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:30Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:30.585979 env[1632]: time="2023-10-02T19:32:30.585915249Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:32:30.586341 env[1632]: time="2023-10-02T19:32:30.586296010Z" level=error msg="Failed to pipe stdout of container \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\"" error="reading from a closed fifo" Oct 2 19:32:30.586524 env[1632]: time="2023-10-02T19:32:30.586473169Z" level=error msg="Failed to pipe stderr of container \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\"" error="reading from a closed fifo" Oct 2 19:32:30.588713 env[1632]: time="2023-10-02T19:32:30.588667359Z" level=error msg="StartContainer for \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:30.588959 kubelet[2091]: E1002 19:32:30.588938 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f" Oct 2 19:32:30.589075 kubelet[2091]: E1002 19:32:30.589062 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:30.589075 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:30.589075 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:32:30.589075 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2sjj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:30.589545 kubelet[2091]: E1002 19:32:30.589113 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:32:30.784958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa-rootfs.mount: Deactivated successfully. 
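Editor's note: both start attempts above (a5170afb... and 4d5e87bb...) die at the same step. Before exec'ing the container process, runc applies the SELinux options echoed in the spec (Type: spc_t, Level: s0) and, as part of that, writes the process label to /proc/self/attr/keycreate so that kernel keyrings created by the container inherit it; the kernel rejects that write with EINVAL, the task never starts, and the stdio FIFOs are closed unwritten, which is why the shim then logs "reading from a closed fifo". A hedged diagnostic sketch that performs the same procfs write directly, to check whether the host kernel/policy accepts that keyring context (requires root on the affected node; the full label string is an assumption, since only the type and level appear in the spec above):

    import errno

    # Assumed full context; only spc_t and s0 come from the SELinuxOptions in the log.
    KEYRING_LABEL = "system_u:system_r:spc_t:s0"

    def probe_keycreate(label=KEYRING_LABEL):
        # The log shows runc failing on this procfs write during container init;
        # an EINVAL here reproduces "write /proc/self/attr/keycreate: invalid argument".
        try:
            with open("/proc/self/attr/keycreate", "w") as attr:
                attr.write(label)
            print("kernel accepted keyring label:", label)
        except OSError as exc:
            if exc.errno == errno.EINVAL:
                print("kernel rejected keyring label with EINVAL:", label)
            else:
                raise

    if __name__ == "__main__":
        probe_keycreate()

If the probe also returns EINVAL, the rejection is coming from the host's SELinux policy/kernel rather than from anything inside the image.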
Oct 2 19:32:31.264927 kubelet[2091]: E1002 19:32:31.264877 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:31.473846 kubelet[2091]: I1002 19:32:31.473817 2091 scope.go:117] "RemoveContainer" containerID="a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa" Oct 2 19:32:31.474512 kubelet[2091]: I1002 19:32:31.474493 2091 scope.go:117] "RemoveContainer" containerID="a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa" Oct 2 19:32:31.475844 env[1632]: time="2023-10-02T19:32:31.475810679Z" level=info msg="RemoveContainer for \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\"" Oct 2 19:32:31.476733 env[1632]: time="2023-10-02T19:32:31.476700840Z" level=info msg="RemoveContainer for \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\"" Oct 2 19:32:31.476932 env[1632]: time="2023-10-02T19:32:31.476897170Z" level=error msg="RemoveContainer for \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\" failed" error="failed to set removing state for container \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\": container is already in removing state" Oct 2 19:32:31.477283 kubelet[2091]: E1002 19:32:31.477262 2091 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\": container is already in removing state" containerID="a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa" Oct 2 19:32:31.477388 kubelet[2091]: E1002 19:32:31.477315 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa": container is already in removing state; Skipping pod "cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)" Oct 2 19:32:31.478450 kubelet[2091]: E1002 19:32:31.477820 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:32:31.480825 env[1632]: time="2023-10-02T19:32:31.480791303Z" level=info msg="RemoveContainer for \"a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa\" returns successfully" Oct 2 19:32:32.265206 kubelet[2091]: E1002 19:32:32.265155 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:32.484149 kubelet[2091]: E1002 19:32:32.484093 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:32:33.241008 kubelet[2091]: E1002 19:32:33.240966 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:33.266736 kubelet[2091]: E1002 19:32:33.266356 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:32:33.387423 kubelet[2091]: W1002 19:32:33.387380 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice/cri-containerd-a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa.scope WatchSource:0}: container "a5170afb9c3f1b6522778e9db93562eb1d86717cfe9a64ee078f8712307bceaa" in namespace "k8s.io": not found Oct 2 19:32:34.266852 kubelet[2091]: E1002 19:32:34.266810 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:35.266932 kubelet[2091]: E1002 19:32:35.266901 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:36.004367 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:32:36.012445 kernel: kauditd_printk_skb: 311 callbacks suppressed Oct 2 19:32:36.012647 kernel: audit: type=1131 audit(1696275156.003:713): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:36.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:32:36.027000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:32:36.027000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:32:36.032352 kernel: audit: type=1334 audit(1696275156.027:714): prog-id=71 op=UNLOAD Oct 2 19:32:36.032546 kernel: audit: type=1334 audit(1696275156.027:715): prog-id=70 op=UNLOAD Oct 2 19:32:36.032593 kernel: audit: type=1334 audit(1696275156.027:716): prog-id=69 op=UNLOAD Oct 2 19:32:36.027000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:32:36.268282 kubelet[2091]: E1002 19:32:36.268061 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:36.497664 kubelet[2091]: W1002 19:32:36.497596 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice/cri-containerd-4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f.scope WatchSource:0}: task 4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f not found: not found Oct 2 19:32:37.269084 kubelet[2091]: E1002 19:32:37.269033 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:38.269535 kubelet[2091]: E1002 19:32:38.269488 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:39.270636 kubelet[2091]: E1002 19:32:39.270585 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:40.271605 kubelet[2091]: E1002 19:32:40.271564 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:41.272262 kubelet[2091]: E1002 19:32:41.272213 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:42.272728 kubelet[2091]: E1002 19:32:42.272683 2091 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:43.273595 kubelet[2091]: E1002 19:32:43.273532 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:43.427694 env[1632]: time="2023-10-02T19:32:43.427563205Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:32:43.440761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3742665923.mount: Deactivated successfully. Oct 2 19:32:43.452369 env[1632]: time="2023-10-02T19:32:43.452315465Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\"" Oct 2 19:32:43.453417 env[1632]: time="2023-10-02T19:32:43.453380027Z" level=info msg="StartContainer for \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\"" Oct 2 19:32:43.486685 systemd[1]: Started cri-containerd-ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8.scope. Oct 2 19:32:43.501333 systemd[1]: cri-containerd-ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8.scope: Deactivated successfully. Oct 2 19:32:43.528458 env[1632]: time="2023-10-02T19:32:43.528110822Z" level=info msg="shim disconnected" id=ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8 Oct 2 19:32:43.528458 env[1632]: time="2023-10-02T19:32:43.528379760Z" level=warning msg="cleaning up after shim disconnected" id=ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8 namespace=k8s.io Oct 2 19:32:43.528458 env[1632]: time="2023-10-02T19:32:43.528392646Z" level=info msg="cleaning up dead shim" Oct 2 19:32:43.542144 env[1632]: time="2023-10-02T19:32:43.542070283Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2479 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:43.542432 env[1632]: time="2023-10-02T19:32:43.542368734Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:32:43.545365 env[1632]: time="2023-10-02T19:32:43.545210948Z" level=error msg="Failed to pipe stdout of container \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\"" error="reading from a closed fifo" Oct 2 19:32:43.547639 env[1632]: time="2023-10-02T19:32:43.547589703Z" level=error msg="Failed to pipe stderr of container \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\"" error="reading from a closed fifo" Oct 2 19:32:43.551005 env[1632]: time="2023-10-02T19:32:43.550786044Z" level=error msg="StartContainer for \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:43.551543 kubelet[2091]: E1002 19:32:43.551513 2091 
remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8" Oct 2 19:32:43.551730 kubelet[2091]: E1002 19:32:43.551642 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:43.551730 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:43.551730 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:32:43.551730 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2sjj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:43.551974 kubelet[2091]: E1002 19:32:43.551759 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:32:44.273973 kubelet[2091]: E1002 19:32:44.273936 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:44.436412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8-rootfs.mount: Deactivated successfully. 
Oct 2 19:32:44.514146 kubelet[2091]: I1002 19:32:44.513973 2091 scope.go:117] "RemoveContainer" containerID="4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f" Oct 2 19:32:44.515236 kubelet[2091]: I1002 19:32:44.515203 2091 scope.go:117] "RemoveContainer" containerID="4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f" Oct 2 19:32:44.521380 env[1632]: time="2023-10-02T19:32:44.521332398Z" level=info msg="RemoveContainer for \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\"" Oct 2 19:32:44.522449 env[1632]: time="2023-10-02T19:32:44.522415943Z" level=info msg="RemoveContainer for \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\"" Oct 2 19:32:44.522570 env[1632]: time="2023-10-02T19:32:44.522511721Z" level=error msg="RemoveContainer for \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\" failed" error="failed to set removing state for container \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\": container is already in removing state" Oct 2 19:32:44.523659 kubelet[2091]: E1002 19:32:44.523637 2091 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\": container is already in removing state" containerID="4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f" Oct 2 19:32:44.524645 kubelet[2091]: E1002 19:32:44.524033 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f": container is already in removing state; Skipping pod "cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)" Oct 2 19:32:44.524645 kubelet[2091]: E1002 19:32:44.524581 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:32:44.534978 env[1632]: time="2023-10-02T19:32:44.534924754Z" level=info msg="RemoveContainer for \"4d5e87bbf7c1902d3d17a45cde868c71d60b82c8c4c5bd777b54404f292bad8f\" returns successfully" Oct 2 19:32:45.274100 kubelet[2091]: E1002 19:32:45.274046 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:46.274610 kubelet[2091]: E1002 19:32:46.274557 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:46.635929 kubelet[2091]: W1002 19:32:46.635486 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice/cri-containerd-ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8.scope WatchSource:0}: task ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8 not found: not found Oct 2 19:32:47.275217 kubelet[2091]: E1002 19:32:47.275163 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:48.275619 kubelet[2091]: E1002 19:32:48.275569 2091 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:49.276545 kubelet[2091]: E1002 19:32:49.276494 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:50.277679 kubelet[2091]: E1002 19:32:50.277639 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:50.762101 update_engine[1624]: I1002 19:32:50.762039 1624 update_attempter.cc:505] Updating boot flags... Oct 2 19:32:51.278832 kubelet[2091]: E1002 19:32:51.278765 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:52.279667 kubelet[2091]: E1002 19:32:52.279584 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:53.241157 kubelet[2091]: E1002 19:32:53.241060 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:53.280190 kubelet[2091]: E1002 19:32:53.280103 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:54.281330 kubelet[2091]: E1002 19:32:54.281282 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:55.281875 kubelet[2091]: E1002 19:32:55.281833 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:55.426420 kubelet[2091]: E1002 19:32:55.426383 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:32:56.282594 kubelet[2091]: E1002 19:32:56.282544 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:57.283493 kubelet[2091]: E1002 19:32:57.283441 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:58.283774 kubelet[2091]: E1002 19:32:58.283723 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:59.284445 kubelet[2091]: E1002 19:32:59.284402 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:00.285400 kubelet[2091]: E1002 19:33:00.285348 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:01.285542 kubelet[2091]: E1002 19:33:01.285471 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:02.286286 kubelet[2091]: E1002 19:33:02.286234 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:03.288190 kubelet[2091]: E1002 19:33:03.287529 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:04.288899 kubelet[2091]: E1002 19:33:04.288851 2091 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:05.289577 kubelet[2091]: E1002 19:33:05.289528 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:06.290691 kubelet[2091]: E1002 19:33:06.290640 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:07.291047 kubelet[2091]: E1002 19:33:07.291002 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:08.292070 kubelet[2091]: E1002 19:33:08.292037 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:09.293174 kubelet[2091]: E1002 19:33:09.293099 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:10.294296 kubelet[2091]: E1002 19:33:10.294230 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:10.427366 env[1632]: time="2023-10-02T19:33:10.427329131Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:33:10.442668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785158932.mount: Deactivated successfully. Oct 2 19:33:10.449375 env[1632]: time="2023-10-02T19:33:10.449321467Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\"" Oct 2 19:33:10.450775 env[1632]: time="2023-10-02T19:33:10.450730481Z" level=info msg="StartContainer for \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\"" Oct 2 19:33:10.474776 systemd[1]: Started cri-containerd-0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5.scope. Oct 2 19:33:10.495273 systemd[1]: cri-containerd-0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5.scope: Deactivated successfully. 
Oct 2 19:33:10.517153 env[1632]: time="2023-10-02T19:33:10.517078174Z" level=info msg="shim disconnected" id=0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5 Oct 2 19:33:10.517153 env[1632]: time="2023-10-02T19:33:10.517152618Z" level=warning msg="cleaning up after shim disconnected" id=0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5 namespace=k8s.io Oct 2 19:33:10.517489 env[1632]: time="2023-10-02T19:33:10.517165288Z" level=info msg="cleaning up dead shim" Oct 2 19:33:10.527700 env[1632]: time="2023-10-02T19:33:10.527638944Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2701 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:10Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:10.527988 env[1632]: time="2023-10-02T19:33:10.527924435Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:33:10.528236 env[1632]: time="2023-10-02T19:33:10.528189774Z" level=error msg="Failed to pipe stdout of container \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\"" error="reading from a closed fifo" Oct 2 19:33:10.529440 env[1632]: time="2023-10-02T19:33:10.529397523Z" level=error msg="Failed to pipe stderr of container \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\"" error="reading from a closed fifo" Oct 2 19:33:10.531491 env[1632]: time="2023-10-02T19:33:10.531446994Z" level=error msg="StartContainer for \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:10.532327 kubelet[2091]: E1002 19:33:10.531702 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5" Oct 2 19:33:10.532327 kubelet[2091]: E1002 19:33:10.531814 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:10.532327 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:10.532327 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:33:10.532327 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2sjj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:10.532327 kubelet[2091]: E1002 19:33:10.531850 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:33:10.567768 kubelet[2091]: I1002 19:33:10.566443 2091 scope.go:117] "RemoveContainer" containerID="ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8" Oct 2 19:33:10.567768 kubelet[2091]: I1002 19:33:10.567267 2091 scope.go:117] "RemoveContainer" containerID="ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8" Oct 2 19:33:10.569237 env[1632]: time="2023-10-02T19:33:10.569200558Z" level=info msg="RemoveContainer for \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\"" Oct 2 19:33:10.569770 env[1632]: time="2023-10-02T19:33:10.569637765Z" level=info msg="RemoveContainer for \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\"" Oct 2 19:33:10.569879 env[1632]: time="2023-10-02T19:33:10.569836532Z" level=error msg="RemoveContainer for \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\" failed" error="failed to set removing state for container \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\": container is already in removing state" Oct 2 19:33:10.570031 kubelet[2091]: E1002 19:33:10.570007 2091 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\": 
container is already in removing state" containerID="ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8" Oct 2 19:33:10.570161 kubelet[2091]: E1002 19:33:10.570048 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8": container is already in removing state; Skipping pod "cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)" Oct 2 19:33:10.570486 kubelet[2091]: E1002 19:33:10.570469 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:33:10.573997 env[1632]: time="2023-10-02T19:33:10.573953993Z" level=info msg="RemoveContainer for \"ff61ab19346ce8d9ead57c7a89caa9821f7b99d198e370c41d747de003d8abc8\" returns successfully" Oct 2 19:33:11.295182 kubelet[2091]: E1002 19:33:11.295138 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:11.436577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5-rootfs.mount: Deactivated successfully. Oct 2 19:33:12.296111 kubelet[2091]: E1002 19:33:12.296070 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:13.240856 kubelet[2091]: E1002 19:33:13.240810 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:13.297207 kubelet[2091]: E1002 19:33:13.297157 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:13.622550 kubelet[2091]: W1002 19:33:13.622180 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice/cri-containerd-0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5.scope WatchSource:0}: task 0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5 not found: not found Oct 2 19:33:14.298268 kubelet[2091]: E1002 19:33:14.298227 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:15.299144 kubelet[2091]: E1002 19:33:15.299095 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:16.300267 kubelet[2091]: E1002 19:33:16.300214 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:17.300400 kubelet[2091]: E1002 19:33:17.300343 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:18.301382 kubelet[2091]: E1002 19:33:18.301338 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:19.302237 kubelet[2091]: E1002 19:33:19.302035 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:33:20.303255 kubelet[2091]: E1002 19:33:20.303201 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:21.303418 kubelet[2091]: E1002 19:33:21.303368 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:22.304190 kubelet[2091]: E1002 19:33:22.304145 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:22.425670 kubelet[2091]: E1002 19:33:22.425635 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:33:23.304315 kubelet[2091]: E1002 19:33:23.304287 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:24.304649 kubelet[2091]: E1002 19:33:24.304606 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:25.305627 kubelet[2091]: E1002 19:33:25.305582 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:26.306538 kubelet[2091]: E1002 19:33:26.306496 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:27.308371 kubelet[2091]: E1002 19:33:27.308329 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:28.309495 kubelet[2091]: E1002 19:33:28.309451 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:29.310412 kubelet[2091]: E1002 19:33:29.310369 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:30.310775 kubelet[2091]: E1002 19:33:30.310723 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:31.311257 kubelet[2091]: E1002 19:33:31.311213 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:32.311534 kubelet[2091]: E1002 19:33:32.311428 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:33.240470 kubelet[2091]: E1002 19:33:33.240419 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:33.311674 kubelet[2091]: E1002 19:33:33.311633 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:33.425728 kubelet[2091]: E1002 19:33:33.425229 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:33:34.312755 kubelet[2091]: E1002 19:33:34.312699 2091 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:35.313478 kubelet[2091]: E1002 19:33:35.313433 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:36.313928 kubelet[2091]: E1002 19:33:36.313882 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:37.314942 kubelet[2091]: E1002 19:33:37.314894 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:38.315834 kubelet[2091]: E1002 19:33:38.315794 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:39.316608 kubelet[2091]: E1002 19:33:39.316527 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:40.317121 kubelet[2091]: E1002 19:33:40.317071 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:41.317521 kubelet[2091]: E1002 19:33:41.317466 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:42.317996 kubelet[2091]: E1002 19:33:42.317943 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:43.318746 kubelet[2091]: E1002 19:33:43.318714 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:44.319118 kubelet[2091]: E1002 19:33:44.319068 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:45.320195 kubelet[2091]: E1002 19:33:45.320150 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:46.321243 kubelet[2091]: E1002 19:33:46.321194 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:47.321868 kubelet[2091]: E1002 19:33:47.321826 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:47.424615 kubelet[2091]: E1002 19:33:47.424581 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:33:48.322241 kubelet[2091]: E1002 19:33:48.322192 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:49.322402 kubelet[2091]: E1002 19:33:49.322337 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:50.322766 kubelet[2091]: E1002 19:33:50.322715 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:51.322890 kubelet[2091]: E1002 19:33:51.322837 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:33:52.323566 kubelet[2091]: E1002 19:33:52.323518 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:53.240983 kubelet[2091]: E1002 19:33:53.240941 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:53.323993 kubelet[2091]: E1002 19:33:53.323892 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:54.324691 kubelet[2091]: E1002 19:33:54.324618 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:55.325102 kubelet[2091]: E1002 19:33:55.325059 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:56.325938 kubelet[2091]: E1002 19:33:56.325891 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:57.326376 kubelet[2091]: E1002 19:33:57.326331 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:58.326513 kubelet[2091]: E1002 19:33:58.326459 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:59.327199 kubelet[2091]: E1002 19:33:59.327168 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:00.328339 kubelet[2091]: E1002 19:34:00.328288 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:01.329517 kubelet[2091]: E1002 19:34:01.329464 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:02.330350 kubelet[2091]: E1002 19:34:02.330302 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:02.427368 env[1632]: time="2023-10-02T19:34:02.427319690Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:34:02.449089 env[1632]: time="2023-10-02T19:34:02.449029785Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\"" Oct 2 19:34:02.451812 env[1632]: time="2023-10-02T19:34:02.451760886Z" level=info msg="StartContainer for \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\"" Oct 2 19:34:02.488507 systemd[1]: Started cri-containerd-11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488.scope. Oct 2 19:34:02.500498 systemd[1]: cri-containerd-11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488.scope: Deactivated successfully. Oct 2 19:34:02.506231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488-rootfs.mount: Deactivated successfully. 
Oct 2 19:34:02.519542 env[1632]: time="2023-10-02T19:34:02.519391346Z" level=info msg="shim disconnected" id=11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488 Oct 2 19:34:02.519542 env[1632]: time="2023-10-02T19:34:02.519523897Z" level=warning msg="cleaning up after shim disconnected" id=11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488 namespace=k8s.io Oct 2 19:34:02.519542 env[1632]: time="2023-10-02T19:34:02.519540407Z" level=info msg="cleaning up dead shim" Oct 2 19:34:02.552688 env[1632]: time="2023-10-02T19:34:02.552642029Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2744 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:02.553010 env[1632]: time="2023-10-02T19:34:02.552938201Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:34:02.554373 env[1632]: time="2023-10-02T19:34:02.554320318Z" level=error msg="Failed to pipe stdout of container \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\"" error="reading from a closed fifo" Oct 2 19:34:02.556282 env[1632]: time="2023-10-02T19:34:02.556226880Z" level=error msg="Failed to pipe stderr of container \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\"" error="reading from a closed fifo" Oct 2 19:34:02.558590 env[1632]: time="2023-10-02T19:34:02.558438029Z" level=error msg="StartContainer for \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:02.558813 kubelet[2091]: E1002 19:34:02.558787 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488" Oct 2 19:34:02.558939 kubelet[2091]: E1002 19:34:02.558926 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:02.558939 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:02.558939 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:34:02.558939 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2sjj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:02.559237 kubelet[2091]: E1002 19:34:02.559047 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:34:02.665219 kubelet[2091]: I1002 19:34:02.665092 2091 scope.go:117] "RemoveContainer" containerID="0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5" Oct 2 19:34:02.666923 kubelet[2091]: I1002 19:34:02.666589 2091 scope.go:117] "RemoveContainer" containerID="0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5" Oct 2 19:34:02.669401 env[1632]: time="2023-10-02T19:34:02.669362072Z" level=info msg="RemoveContainer for \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\"" Oct 2 19:34:02.670704 env[1632]: time="2023-10-02T19:34:02.670625102Z" level=info msg="RemoveContainer for \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\"" Oct 2 19:34:02.671071 env[1632]: time="2023-10-02T19:34:02.671024683Z" level=error msg="RemoveContainer for \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\" failed" error="failed to set removing state for container \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\": container is already in removing state" Oct 2 19:34:02.672545 kubelet[2091]: E1002 19:34:02.671766 2091 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\": 
container is already in removing state" containerID="0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5" Oct 2 19:34:02.672545 kubelet[2091]: E1002 19:34:02.671815 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5": container is already in removing state; Skipping pod "cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)" Oct 2 19:34:02.672545 kubelet[2091]: E1002 19:34:02.672257 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:34:02.682734 env[1632]: time="2023-10-02T19:34:02.682640631Z" level=info msg="RemoveContainer for \"0e42e545ddda4338ed6feef9001005944648cac837d0a5c16ad27935b83065e5\" returns successfully" Oct 2 19:34:03.331178 kubelet[2091]: E1002 19:34:03.331114 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:04.332258 kubelet[2091]: E1002 19:34:04.332205 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:05.332969 kubelet[2091]: E1002 19:34:05.332914 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:05.630045 kubelet[2091]: W1002 19:34:05.629926 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice/cri-containerd-11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488.scope WatchSource:0}: task 11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488 not found: not found Oct 2 19:34:06.333600 kubelet[2091]: E1002 19:34:06.333556 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:07.333807 kubelet[2091]: E1002 19:34:07.333757 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:08.335000 kubelet[2091]: E1002 19:34:08.334947 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:09.335230 kubelet[2091]: E1002 19:34:09.335204 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:10.335765 kubelet[2091]: E1002 19:34:10.335716 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:11.336033 kubelet[2091]: E1002 19:34:11.335989 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:12.336254 kubelet[2091]: E1002 19:34:12.336205 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:13.241337 kubelet[2091]: E1002 19:34:13.241288 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:13.310483 
kubelet[2091]: E1002 19:34:13.310424 2091 kubelet_node_status.go:452] "Node not becoming ready in time after startup" Oct 2 19:34:13.337359 kubelet[2091]: E1002 19:34:13.337318 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:13.386551 kubelet[2091]: E1002 19:34:13.386514 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:14.338500 kubelet[2091]: E1002 19:34:14.338445 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:15.338879 kubelet[2091]: E1002 19:34:15.338832 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:16.339939 kubelet[2091]: E1002 19:34:16.339882 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:17.340244 kubelet[2091]: E1002 19:34:17.340205 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:17.424471 kubelet[2091]: E1002 19:34:17.424436 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:34:18.340806 kubelet[2091]: E1002 19:34:18.340758 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:18.387683 kubelet[2091]: E1002 19:34:18.387647 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:19.341112 kubelet[2091]: E1002 19:34:19.341064 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:20.341296 kubelet[2091]: E1002 19:34:20.341246 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:21.341731 kubelet[2091]: E1002 19:34:21.341679 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:22.341877 kubelet[2091]: E1002 19:34:22.341840 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:23.342098 kubelet[2091]: E1002 19:34:23.342042 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:23.388224 kubelet[2091]: E1002 19:34:23.388178 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:24.343027 kubelet[2091]: E1002 19:34:24.342986 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:25.343466 kubelet[2091]: E1002 19:34:25.343416 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:26.344381 kubelet[2091]: E1002 19:34:26.344333 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:27.345394 kubelet[2091]: E1002 19:34:27.345336 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:28.345763 kubelet[2091]: E1002 19:34:28.345708 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:28.389909 kubelet[2091]: E1002 19:34:28.389880 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:29.346686 kubelet[2091]: E1002 19:34:29.346636 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:30.347694 kubelet[2091]: E1002 19:34:30.347651 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:31.348713 kubelet[2091]: E1002 19:34:31.348671 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:32.349070 kubelet[2091]: E1002 19:34:32.349016 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:32.425268 kubelet[2091]: E1002 19:34:32.425220 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:34:33.241413 kubelet[2091]: E1002 19:34:33.241370 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:33.349969 kubelet[2091]: E1002 19:34:33.349917 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:33.390358 kubelet[2091]: E1002 19:34:33.390312 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:34.350491 kubelet[2091]: E1002 19:34:34.350435 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.351581 kubelet[2091]: E1002 19:34:35.351523 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:36.352947 kubelet[2091]: E1002 19:34:36.352900 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:37.353145 kubelet[2091]: E1002 19:34:37.353087 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:38.354009 kubelet[2091]: E1002 19:34:38.353958 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:38.391902 kubelet[2091]: E1002 19:34:38.391866 2091 kubelet.go:2855] "Container runtime 
network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:39.354629 kubelet[2091]: E1002 19:34:39.354555 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:40.355218 kubelet[2091]: E1002 19:34:40.355168 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:41.355910 kubelet[2091]: E1002 19:34:41.355686 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:42.356526 kubelet[2091]: E1002 19:34:42.356473 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:43.356954 kubelet[2091]: E1002 19:34:43.356912 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:43.392397 kubelet[2091]: E1002 19:34:43.392356 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:44.357170 kubelet[2091]: E1002 19:34:44.357107 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:44.425402 kubelet[2091]: E1002 19:34:44.425356 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:34:45.357909 kubelet[2091]: E1002 19:34:45.357857 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:46.358709 kubelet[2091]: E1002 19:34:46.358657 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:47.359462 kubelet[2091]: E1002 19:34:47.359381 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:48.360381 kubelet[2091]: E1002 19:34:48.360334 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:48.393581 kubelet[2091]: E1002 19:34:48.393551 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:49.360868 kubelet[2091]: E1002 19:34:49.360814 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:50.361860 kubelet[2091]: E1002 19:34:50.361822 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:51.362379 kubelet[2091]: E1002 19:34:51.362333 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:52.363236 kubelet[2091]: E1002 19:34:52.363194 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:34:53.240737 kubelet[2091]: E1002 19:34:53.240686 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:53.364238 kubelet[2091]: E1002 19:34:53.364202 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:53.394328 kubelet[2091]: E1002 19:34:53.394298 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:54.365375 kubelet[2091]: E1002 19:34:54.365315 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:55.366574 kubelet[2091]: E1002 19:34:55.366520 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:56.366950 kubelet[2091]: E1002 19:34:56.366908 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:57.367248 kubelet[2091]: E1002 19:34:57.367199 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:58.367566 kubelet[2091]: E1002 19:34:58.367524 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:58.395286 kubelet[2091]: E1002 19:34:58.395259 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:58.425113 kubelet[2091]: E1002 19:34:58.425071 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:34:59.368147 kubelet[2091]: E1002 19:34:59.368087 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:00.369288 kubelet[2091]: E1002 19:35:00.369229 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:01.369890 kubelet[2091]: E1002 19:35:01.369845 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:02.370869 kubelet[2091]: E1002 19:35:02.370756 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:03.371521 kubelet[2091]: E1002 19:35:03.371471 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:03.396002 kubelet[2091]: E1002 19:35:03.395964 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:04.372229 kubelet[2091]: E1002 19:35:04.372178 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:05.372545 kubelet[2091]: E1002 
19:35:05.372381 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:06.373171 kubelet[2091]: E1002 19:35:06.373115 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:07.373921 kubelet[2091]: E1002 19:35:07.373871 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:08.374256 kubelet[2091]: E1002 19:35:08.374210 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:08.397439 kubelet[2091]: E1002 19:35:08.397410 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:09.375203 kubelet[2091]: E1002 19:35:09.375155 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:10.375645 kubelet[2091]: E1002 19:35:10.375597 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:11.376642 kubelet[2091]: E1002 19:35:11.376592 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:12.377383 kubelet[2091]: E1002 19:35:12.377333 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:13.240646 kubelet[2091]: E1002 19:35:13.240599 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:13.378044 kubelet[2091]: E1002 19:35:13.378009 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:13.398864 kubelet[2091]: E1002 19:35:13.398829 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:13.425578 kubelet[2091]: E1002 19:35:13.425347 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:35:14.378753 kubelet[2091]: E1002 19:35:14.378704 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:15.379229 kubelet[2091]: E1002 19:35:15.379177 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:16.379888 kubelet[2091]: E1002 19:35:16.379840 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:17.380203 kubelet[2091]: E1002 19:35:17.380150 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:18.380874 kubelet[2091]: E1002 19:35:18.380825 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:35:18.399793 kubelet[2091]: E1002 19:35:18.399752 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:19.382027 kubelet[2091]: E1002 19:35:19.381974 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:20.383059 kubelet[2091]: E1002 19:35:20.383009 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:21.383415 kubelet[2091]: E1002 19:35:21.383361 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:22.384070 kubelet[2091]: E1002 19:35:22.384018 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:23.384532 kubelet[2091]: E1002 19:35:23.384482 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:23.400585 kubelet[2091]: E1002 19:35:23.400552 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:24.385588 kubelet[2091]: E1002 19:35:24.385539 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:25.385971 kubelet[2091]: E1002 19:35:25.385919 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:25.429184 env[1632]: time="2023-10-02T19:35:25.428988133Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:35:25.465072 env[1632]: time="2023-10-02T19:35:25.465008327Z" level=info msg="CreateContainer within sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\"" Oct 2 19:35:25.465808 env[1632]: time="2023-10-02T19:35:25.465768478Z" level=info msg="StartContainer for \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\"" Oct 2 19:35:25.497161 systemd[1]: Started cri-containerd-fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251.scope. Oct 2 19:35:25.515205 systemd[1]: cri-containerd-fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251.scope: Deactivated successfully. Oct 2 19:35:25.521178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251-rootfs.mount: Deactivated successfully. 
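[editor's note] The most frequent entry in this log is the kubelet's static-pod source reporting that /etc/kubernetes/manifests does not exist. The Go sketch below illustrates that behaviour as a periodic stat of the configured path, with the once-per-second cadence inferred from the timestamps above; it is a rough model of the pattern, not the kubelet's file_linux.go.

package main

import (
	"log"
	"os"
	"time"
)

// Polls the static-pod manifest directory and, while it is absent, logs the
// same "path does not exist, ignoring" condition seen throughout this log.
// The 1s interval mirrors the roughly once-per-second cadence of the entries.
func main() {
	const path = "/etc/kubernetes/manifests"

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for range ticker.C {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			log.Printf("Unable to read config path %q: path does not exist, ignoring", path)
			continue
		}
		// Directory present: a real kubelet would read and apply the
		// static pod manifests found here.
	}
}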
Oct 2 19:35:25.537693 env[1632]: time="2023-10-02T19:35:25.537629192Z" level=info msg="shim disconnected" id=fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251 Oct 2 19:35:25.537693 env[1632]: time="2023-10-02T19:35:25.537691255Z" level=warning msg="cleaning up after shim disconnected" id=fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251 namespace=k8s.io Oct 2 19:35:25.538064 env[1632]: time="2023-10-02T19:35:25.537703814Z" level=info msg="cleaning up dead shim" Oct 2 19:35:25.548269 env[1632]: time="2023-10-02T19:35:25.548211429Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2793 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:35:25Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:35:25.548562 env[1632]: time="2023-10-02T19:35:25.548500725Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:35:25.548734 env[1632]: time="2023-10-02T19:35:25.548696347Z" level=error msg="Failed to pipe stdout of container \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\"" error="reading from a closed fifo" Oct 2 19:35:25.549154 env[1632]: time="2023-10-02T19:35:25.548830657Z" level=error msg="Failed to pipe stderr of container \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\"" error="reading from a closed fifo" Oct 2 19:35:25.550648 env[1632]: time="2023-10-02T19:35:25.550597870Z" level=error msg="StartContainer for \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:35:25.550997 kubelet[2091]: E1002 19:35:25.550879 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251" Oct 2 19:35:25.551432 kubelet[2091]: E1002 19:35:25.551409 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:35:25.551432 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:35:25.551432 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:35:25.551432 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2sjj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:35:25.551766 kubelet[2091]: E1002 19:35:25.551496 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:35:25.831864 kubelet[2091]: I1002 19:35:25.831838 2091 scope.go:117] "RemoveContainer" containerID="11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488" Oct 2 19:35:25.832890 kubelet[2091]: I1002 19:35:25.832865 2091 scope.go:117] "RemoveContainer" containerID="11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488" Oct 2 19:35:25.839906 env[1632]: time="2023-10-02T19:35:25.839722617Z" level=info msg="RemoveContainer for \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\"" Oct 2 19:35:25.842992 env[1632]: time="2023-10-02T19:35:25.842952089Z" level=info msg="RemoveContainer for \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\"" Oct 2 19:35:25.843704 env[1632]: time="2023-10-02T19:35:25.843660267Z" level=error msg="RemoveContainer for \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\" failed" error="failed to set removing state for container \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\": container is already in removing state" Oct 2 19:35:25.844599 kubelet[2091]: E1002 19:35:25.844447 2091 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\": 
container is already in removing state" containerID="11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488" Oct 2 19:35:25.845304 kubelet[2091]: E1002 19:35:25.845222 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488": container is already in removing state; Skipping pod "cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)" Oct 2 19:35:25.847390 kubelet[2091]: E1002 19:35:25.847370 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-lgvxf_kube-system(a1eeba4e-db02-4287-bf8e-d8bd41c720f8)\"" pod="kube-system/cilium-lgvxf" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" Oct 2 19:35:25.853087 env[1632]: time="2023-10-02T19:35:25.853027151Z" level=info msg="RemoveContainer for \"11a85c2fa826f849b2188ddc23e167af2fa9be5cc1024a955f4688e506e75488\" returns successfully" Oct 2 19:35:26.386983 kubelet[2091]: E1002 19:35:26.386931 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:27.387487 kubelet[2091]: E1002 19:35:27.387437 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:28.234457 env[1632]: time="2023-10-02T19:35:28.234416754Z" level=info msg="StopPodSandbox for \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\"" Oct 2 19:35:28.235341 env[1632]: time="2023-10-02T19:35:28.235305035Z" level=info msg="Container to stop \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:35:28.240874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68-shm.mount: Deactivated successfully. Oct 2 19:35:28.250078 systemd[1]: cri-containerd-090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68.scope: Deactivated successfully. Oct 2 19:35:28.249000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:35:28.260736 kernel: audit: type=1334 audit(1696275328.249:717): prog-id=74 op=UNLOAD Oct 2 19:35:28.260000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:35:28.265089 kernel: audit: type=1334 audit(1696275328.260:718): prog-id=78 op=UNLOAD Oct 2 19:35:28.293351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68-rootfs.mount: Deactivated successfully. 
Oct 2 19:35:28.312367 env[1632]: time="2023-10-02T19:35:28.312312833Z" level=info msg="shim disconnected" id=090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68 Oct 2 19:35:28.312599 env[1632]: time="2023-10-02T19:35:28.312575439Z" level=warning msg="cleaning up after shim disconnected" id=090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68 namespace=k8s.io Oct 2 19:35:28.312660 env[1632]: time="2023-10-02T19:35:28.312596993Z" level=info msg="cleaning up dead shim" Oct 2 19:35:28.324467 env[1632]: time="2023-10-02T19:35:28.324412897Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2824 runtime=io.containerd.runc.v2\n" Oct 2 19:35:28.324795 env[1632]: time="2023-10-02T19:35:28.324758750Z" level=info msg="TearDown network for sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" successfully" Oct 2 19:35:28.324942 env[1632]: time="2023-10-02T19:35:28.324792251Z" level=info msg="StopPodSandbox for \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" returns successfully" Oct 2 19:35:28.388460 kubelet[2091]: E1002 19:35:28.388415 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:28.402392 kubelet[2091]: E1002 19:35:28.402360 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:28.480077 kubelet[2091]: I1002 19:35:28.480029 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sjj5\" (UniqueName: \"kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-kube-api-access-2sjj5\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480077 kubelet[2091]: I1002 19:35:28.480081 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-cgroup\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480110 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hubble-tls\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480151 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-clustermesh-secrets\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480177 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hostproc\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480200 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cni-path\") pod 
\"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480225 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-kernel\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480249 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-run\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480292 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-etc-cni-netd\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480332 kubelet[2091]: I1002 19:35:28.480320 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-xtables-lock\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480677 kubelet[2091]: I1002 19:35:28.480346 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-lib-modules\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480677 kubelet[2091]: I1002 19:35:28.480371 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-bpf-maps\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480677 kubelet[2091]: I1002 19:35:28.480406 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-config-path\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480677 kubelet[2091]: I1002 19:35:28.480437 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-net\") pod \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\" (UID: \"a1eeba4e-db02-4287-bf8e-d8bd41c720f8\") " Oct 2 19:35:28.480677 kubelet[2091]: I1002 19:35:28.480497 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.480913 kubelet[2091]: I1002 19:35:28.480887 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.480995 kubelet[2091]: I1002 19:35:28.480927 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.482742 kubelet[2091]: I1002 19:35:28.482693 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.483033 kubelet[2091]: I1002 19:35:28.483013 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.485275 kubelet[2091]: I1002 19:35:28.483189 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hostproc" (OuterVolumeSpecName: "hostproc") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.494809 kubelet[2091]: I1002 19:35:28.483214 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cni-path" (OuterVolumeSpecName: "cni-path") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.495071 kubelet[2091]: I1002 19:35:28.483239 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.495306 kubelet[2091]: I1002 19:35:28.483263 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.499103 systemd[1]: var-lib-kubelet-pods-a1eeba4e\x2ddb02\x2d4287\x2dbf8e\x2dd8bd41c720f8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:35:28.517892 kubelet[2091]: I1002 19:35:28.483283 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:28.518021 kubelet[2091]: I1002 19:35:28.494701 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:35:28.518170 kubelet[2091]: I1002 19:35:28.495896 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:35:28.518310 kubelet[2091]: I1002 19:35:28.516937 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:28.524317 systemd[1]: var-lib-kubelet-pods-a1eeba4e\x2ddb02\x2d4287\x2dbf8e\x2dd8bd41c720f8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:35:28.530569 kubelet[2091]: I1002 19:35:28.530522 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-kube-api-access-2sjj5" (OuterVolumeSpecName: "kube-api-access-2sjj5") pod "a1eeba4e-db02-4287-bf8e-d8bd41c720f8" (UID: "a1eeba4e-db02-4287-bf8e-d8bd41c720f8"). InnerVolumeSpecName "kube-api-access-2sjj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:28.532518 systemd[1]: var-lib-kubelet-pods-a1eeba4e\x2ddb02\x2d4287\x2dbf8e\x2dd8bd41c720f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2sjj5.mount: Deactivated successfully. 
Oct 2 19:35:28.581527 kubelet[2091]: I1002 19:35:28.581486 2091 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2sjj5\" (UniqueName: \"kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-kube-api-access-2sjj5\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581527 kubelet[2091]: I1002 19:35:28.581524 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-cgroup\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581527 kubelet[2091]: I1002 19:35:28.581543 2091 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hubble-tls\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581557 2091 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-clustermesh-secrets\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581570 2091 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-hostproc\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581583 2091 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cni-path\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581595 2091 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-kernel\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581608 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-run\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581620 2091 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-etc-cni-netd\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581631 2091 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-xtables-lock\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581643 2091 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-lib-modules\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581657 2091 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-bpf-maps\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.581778 kubelet[2091]: I1002 19:35:28.581671 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-cilium-config-path\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 
19:35:28.581778 kubelet[2091]: I1002 19:35:28.581689 2091 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1eeba4e-db02-4287-bf8e-d8bd41c720f8-host-proc-sys-net\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:35:28.643237 kubelet[2091]: W1002 19:35:28.643201 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice/cri-containerd-fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251.scope WatchSource:0}: task fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251 not found: not found Oct 2 19:35:28.843893 kubelet[2091]: I1002 19:35:28.843788 2091 scope.go:117] "RemoveContainer" containerID="fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251" Oct 2 19:35:28.847500 env[1632]: time="2023-10-02T19:35:28.846512051Z" level=info msg="RemoveContainer for \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\"" Oct 2 19:35:28.852592 env[1632]: time="2023-10-02T19:35:28.851659786Z" level=info msg="RemoveContainer for \"fe21c5a358f109358c024157a31aa17b1300d242cf470f74fbc61e637317d251\" returns successfully" Oct 2 19:35:28.853372 systemd[1]: Removed slice kubepods-burstable-poda1eeba4e_db02_4287_bf8e_d8bd41c720f8.slice. Oct 2 19:35:29.389332 kubelet[2091]: E1002 19:35:29.389292 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:29.427489 kubelet[2091]: I1002 19:35:29.427447 2091 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" path="/var/lib/kubelet/pods/a1eeba4e-db02-4287-bf8e-d8bd41c720f8/volumes" Oct 2 19:35:30.389861 kubelet[2091]: E1002 19:35:30.389726 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:31.208707 kubelet[2091]: I1002 19:35:31.208669 2091 topology_manager.go:215] "Topology Admit Handler" podUID="339af218-a779-4826-9238-5140e0a6ecd3" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-t7db7" Oct 2 19:35:31.208921 kubelet[2091]: E1002 19:35:31.208728 2091 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: E1002 19:35:31.208755 2091 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: E1002 19:35:31.208763 2091 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: E1002 19:35:31.208771 2091 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: E1002 19:35:31.208780 2091 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: I1002 19:35:31.208799 2091 memory_manager.go:346] "RemoveStaleState removing state" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: I1002 19:35:31.208807 2091 memory_manager.go:346] "RemoveStaleState removing state" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" 
containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: I1002 19:35:31.208829 2091 memory_manager.go:346] "RemoveStaleState removing state" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: I1002 19:35:31.208837 2091 memory_manager.go:346] "RemoveStaleState removing state" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.208921 kubelet[2091]: I1002 19:35:31.208845 2091 memory_manager.go:346] "RemoveStaleState removing state" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.209927 kubelet[2091]: I1002 19:35:31.209677 2091 topology_manager.go:215] "Topology Admit Handler" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" podNamespace="kube-system" podName="cilium-dl6c9" Oct 2 19:35:31.209927 kubelet[2091]: E1002 19:35:31.209730 2091 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.209927 kubelet[2091]: I1002 19:35:31.209757 2091 memory_manager.go:346] "RemoveStaleState removing state" podUID="a1eeba4e-db02-4287-bf8e-d8bd41c720f8" containerName="mount-cgroup" Oct 2 19:35:31.221034 systemd[1]: Created slice kubepods-besteffort-pod339af218_a779_4826_9238_5140e0a6ecd3.slice. Oct 2 19:35:31.237992 systemd[1]: Created slice kubepods-burstable-pod10a14911_e37c_41e2_896b_7bc9e69a93b3.slice. Oct 2 19:35:31.390369 kubelet[2091]: E1002 19:35:31.390322 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:31.392567 kubelet[2091]: I1002 19:35:31.392536 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-ipsec-secrets\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.392778 kubelet[2091]: I1002 19:35:31.392758 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-net\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.392874 kubelet[2091]: I1002 19:35:31.392799 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-kernel\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.392874 kubelet[2091]: I1002 19:35:31.392831 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-etc-cni-netd\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.392874 kubelet[2091]: I1002 19:35:31.392862 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-xtables-lock\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393017 
kubelet[2091]: I1002 19:35:31.392890 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-cgroup\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393017 kubelet[2091]: I1002 19:35:31.392922 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-clustermesh-secrets\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393017 kubelet[2091]: I1002 19:35:31.392954 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-config-path\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393017 kubelet[2091]: I1002 19:35:31.392988 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339af218-a779-4826-9238-5140e0a6ecd3-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-t7db7\" (UID: \"339af218-a779-4826-9238-5140e0a6ecd3\") " pod="kube-system/cilium-operator-6bc8ccdb58-t7db7" Oct 2 19:35:31.393238 kubelet[2091]: I1002 19:35:31.393025 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlln4\" (UniqueName: \"kubernetes.io/projected/339af218-a779-4826-9238-5140e0a6ecd3-kube-api-access-tlln4\") pod \"cilium-operator-6bc8ccdb58-t7db7\" (UID: \"339af218-a779-4826-9238-5140e0a6ecd3\") " pod="kube-system/cilium-operator-6bc8ccdb58-t7db7" Oct 2 19:35:31.393238 kubelet[2091]: I1002 19:35:31.393058 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-hostproc\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393238 kubelet[2091]: I1002 19:35:31.393089 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-run\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393238 kubelet[2091]: I1002 19:35:31.393120 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-bpf-maps\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393238 kubelet[2091]: I1002 19:35:31.393168 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cni-path\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393238 kubelet[2091]: I1002 19:35:31.393229 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-lib-modules\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393512 kubelet[2091]: I1002 19:35:31.393262 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-hubble-tls\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.393512 kubelet[2091]: I1002 19:35:31.393294 2091 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4n6\" (UniqueName: \"kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-kube-api-access-4z4n6\") pod \"cilium-dl6c9\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " pod="kube-system/cilium-dl6c9" Oct 2 19:35:31.531615 env[1632]: time="2023-10-02T19:35:31.530868161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-t7db7,Uid:339af218-a779-4826-9238-5140e0a6ecd3,Namespace:kube-system,Attempt:0,}" Oct 2 19:35:31.551411 env[1632]: time="2023-10-02T19:35:31.551304747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:35:31.551411 env[1632]: time="2023-10-02T19:35:31.551371761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:35:31.551411 env[1632]: time="2023-10-02T19:35:31.551387836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:35:31.552053 env[1632]: time="2023-10-02T19:35:31.552007081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f pid=2853 runtime=io.containerd.runc.v2 Oct 2 19:35:31.560056 env[1632]: time="2023-10-02T19:35:31.560006343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dl6c9,Uid:10a14911-e37c-41e2-896b-7bc9e69a93b3,Namespace:kube-system,Attempt:0,}" Oct 2 19:35:31.566289 systemd[1]: Started cri-containerd-844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f.scope. Oct 2 19:35:31.599509 kernel: audit: type=1400 audit(1696275331.587:719): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.599743 kernel: audit: type=1400 audit(1696275331.587:720): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600027 env[1632]: time="2023-10-02T19:35:31.589039684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:35:31.600027 env[1632]: time="2023-10-02T19:35:31.589158110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:35:31.600027 env[1632]: time="2023-10-02T19:35:31.589223326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:35:31.600027 env[1632]: time="2023-10-02T19:35:31.589558439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26 pid=2886 runtime=io.containerd.runc.v2 Oct 2 19:35:31.609669 kernel: audit: type=1400 audit(1696275331.587:721): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.620292 kernel: audit: type=1400 audit(1696275331.587:722): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.620411 kernel: audit: type=1400 audit(1696275331.587:723): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.620439 kernel: audit: type=1400 audit(1696275331.587:724): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.630152 kernel: audit: type=1400 audit(1696275331.587:725): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.635567 systemd[1]: Started cri-containerd-746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26.scope. 
Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.587000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.588000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.588000 audit: BPF prog-id=84 op=LOAD Oct 2 19:35:31.599000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.599000 audit[2863]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000219c48 a2=10 a3=1c items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834346361353735303532373035323230316632376661616463323261 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0002196b0 a2=3c a3=8 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834346361353735303532373035323230316632376661616463323261 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.644157 kernel: audit: type=1400 audit(1696275331.587:726): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit: BPF prog-id=85 op=LOAD Oct 2 19:35:31.600000 audit[2863]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0002199d8 a2=78 a3=c0002a8910 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834346361353735303532373035323230316632376661616463323261 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit: BPF prog-id=86 op=LOAD Oct 2 19:35:31.600000 audit[2863]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000219770 a2=78 a3=c0002a8958 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834346361353735303532373035323230316632376661616463323261 Oct 2 19:35:31.600000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:35:31.600000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { perfmon } for pid=2863 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit[2863]: AVC avc: denied { bpf } for pid=2863 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.600000 audit: BPF prog-id=87 op=LOAD Oct 2 19:35:31.600000 audit[2863]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000219c30 a2=78 a3=c0002a8d68 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 2 19:35:31.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834346361353735303532373035323230316632376661616463323261 Oct 2 19:35:31.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.665000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.666000 audit: BPF prog-id=88 op=LOAD Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000117c48 a2=10 a3=1c items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734366434333636313235343735633238363039643764373064343036 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001176b0 a2=3c a3=c items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734366434333636313235343735633238363039643764373064343036 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.667000 audit: BPF prog-id=89 op=LOAD Oct 2 19:35:31.667000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001179d8 a2=78 a3=c0003b6780 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734366434333636313235343735633238363039643764373064343036 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit: BPF prog-id=90 op=LOAD Oct 2 19:35:31.668000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000117770 a2=78 a3=c0003b67c8 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734366434333636313235343735633238363039643764373064343036 Oct 2 19:35:31.668000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:35:31.668000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { perfmon } for pid=2896 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit[2896]: AVC avc: denied { bpf } for pid=2896 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:31.668000 audit: BPF prog-id=91 op=LOAD Oct 2 19:35:31.668000 audit[2896]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000117c30 a2=78 a3=c0003b6bd8 items=0 ppid=2886 pid=2896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:31.668000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734366434333636313235343735633238363039643764373064343036 Oct 2 19:35:31.682621 env[1632]: time="2023-10-02T19:35:31.682572710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-t7db7,Uid:339af218-a779-4826-9238-5140e0a6ecd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f\"" Oct 2 19:35:31.686161 env[1632]: time="2023-10-02T19:35:31.685031383Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:35:31.688035 env[1632]: time="2023-10-02T19:35:31.687980518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dl6c9,Uid:10a14911-e37c-41e2-896b-7bc9e69a93b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\"" Oct 2 19:35:31.690827 env[1632]: time="2023-10-02T19:35:31.690790331Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:35:31.709324 env[1632]: time="2023-10-02T19:35:31.709170210Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\"" Oct 2 19:35:31.710272 env[1632]: time="2023-10-02T19:35:31.710245473Z" level=info msg="StartContainer for \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\"" Oct 2 19:35:31.734621 systemd[1]: Started 
cri-containerd-8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588.scope. Oct 2 19:35:31.750192 systemd[1]: cri-containerd-8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588.scope: Deactivated successfully. Oct 2 19:35:31.781777 env[1632]: time="2023-10-02T19:35:31.781585936Z" level=info msg="shim disconnected" id=8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588 Oct 2 19:35:31.781777 env[1632]: time="2023-10-02T19:35:31.781702312Z" level=warning msg="cleaning up after shim disconnected" id=8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588 namespace=k8s.io Oct 2 19:35:31.781777 env[1632]: time="2023-10-02T19:35:31.781720198Z" level=info msg="cleaning up dead shim" Oct 2 19:35:31.794727 env[1632]: time="2023-10-02T19:35:31.794677069Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2951 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:35:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:35:31.795006 env[1632]: time="2023-10-02T19:35:31.794948111Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:35:31.795326 env[1632]: time="2023-10-02T19:35:31.795289546Z" level=error msg="Failed to pipe stderr of container \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\"" error="reading from a closed fifo" Oct 2 19:35:31.800612 env[1632]: time="2023-10-02T19:35:31.800554946Z" level=error msg="Failed to pipe stdout of container \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\"" error="reading from a closed fifo" Oct 2 19:35:31.805519 env[1632]: time="2023-10-02T19:35:31.805439507Z" level=error msg="StartContainer for \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:35:31.805841 kubelet[2091]: E1002 19:35:31.805818 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588" Oct 2 19:35:31.806474 kubelet[2091]: E1002 19:35:31.806085 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:35:31.806474 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:35:31.806474 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:35:31.806474 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4z4n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:35:31.806474 kubelet[2091]: E1002 19:35:31.806180 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:35:31.854906 env[1632]: time="2023-10-02T19:35:31.854867567Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:35:31.872957 env[1632]: time="2023-10-02T19:35:31.872905363Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\"" Oct 2 19:35:31.874389 env[1632]: time="2023-10-02T19:35:31.874352751Z" level=info msg="StartContainer for \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\"" Oct 2 19:35:31.902242 systemd[1]: Started cri-containerd-43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0.scope. Oct 2 19:35:31.919006 systemd[1]: cri-containerd-43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0.scope: Deactivated successfully. 
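The audit records above follow one pattern: runc (and systemd) trip SELinux denials for capability2 bits 38 and 39, which are CAP_PERFMON and CAP_BPF, while issuing bpf(2) calls (syscall=321 on arch=c000003e, i.e. x86_64), most likely for the device-controller eBPF program runc installs on cgroup v2 hosts. Each SYSCALL record is followed by a PROCTITLE field, which is simply the process command line hex-encoded with NUL-separated arguments. A minimal sketch that decodes one of the values above, using only what is in the log (the trailing container ID is truncated in the log, so it decodes truncated as well):

# Decode an audit PROCTITLE value: it is the raw /proc/<pid>/cmdline,
# hex-encoded, with argv elements separated by NUL bytes.
proctitle_hex = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"
    "2D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74"
    "696D652E76322E7461736B2F6B38732E696F2F"
    "3834346361353735303532373035323230316632376661616463323261"
)
argv = [part.decode() for part in bytes.fromhex(proctitle_hex).split(b"\x00")]
print(argv)
# ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/844ca5750527052201f27faadc22a']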
Oct 2 19:35:31.936596 env[1632]: time="2023-10-02T19:35:31.936533667Z" level=info msg="shim disconnected" id=43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0 Oct 2 19:35:31.936596 env[1632]: time="2023-10-02T19:35:31.936593531Z" level=warning msg="cleaning up after shim disconnected" id=43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0 namespace=k8s.io Oct 2 19:35:31.936891 env[1632]: time="2023-10-02T19:35:31.936605861Z" level=info msg="cleaning up dead shim" Oct 2 19:35:31.946749 env[1632]: time="2023-10-02T19:35:31.946697530Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2988 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:35:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:35:31.948219 env[1632]: time="2023-10-02T19:35:31.948140689Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:35:31.952371 env[1632]: time="2023-10-02T19:35:31.952300946Z" level=error msg="Failed to pipe stdout of container \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\"" error="reading from a closed fifo" Oct 2 19:35:31.952803 env[1632]: time="2023-10-02T19:35:31.952569551Z" level=error msg="Failed to pipe stderr of container \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\"" error="reading from a closed fifo" Oct 2 19:35:31.954568 env[1632]: time="2023-10-02T19:35:31.954523062Z" level=error msg="StartContainer for \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:35:31.954823 kubelet[2091]: E1002 19:35:31.954799 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0" Oct 2 19:35:31.954965 kubelet[2091]: E1002 19:35:31.954945 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:35:31.954965 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:35:31.954965 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:35:31.954965 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4z4n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:35:31.955250 kubelet[2091]: E1002 19:35:31.955002 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:35:32.391275 kubelet[2091]: E1002 19:35:32.391230 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:32.857693 kubelet[2091]: I1002 19:35:32.857656 2091 scope.go:117] "RemoveContainer" containerID="8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588" Oct 2 19:35:32.858229 kubelet[2091]: I1002 19:35:32.858209 2091 scope.go:117] "RemoveContainer" containerID="8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588" Oct 2 19:35:32.859832 env[1632]: time="2023-10-02T19:35:32.859789909Z" level=info msg="RemoveContainer for \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\"" Oct 2 19:35:32.860780 env[1632]: time="2023-10-02T19:35:32.860642112Z" level=info msg="RemoveContainer for \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\"" Oct 2 19:35:32.860883 env[1632]: time="2023-10-02T19:35:32.860846042Z" level=error msg="RemoveContainer for \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\" failed" error="failed to set removing state for container \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\": container is already in removing state" Oct 2 19:35:32.861023 kubelet[2091]: E1002 19:35:32.861001 2091 remote_runtime.go:385] "RemoveContainer from 
runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\": container is already in removing state" containerID="8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588" Oct 2 19:35:32.861101 kubelet[2091]: E1002 19:35:32.861039 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588": container is already in removing state; Skipping pod "cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)" Oct 2 19:35:32.861518 kubelet[2091]: E1002 19:35:32.861499 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:35:32.863909 env[1632]: time="2023-10-02T19:35:32.863805606Z" level=info msg="RemoveContainer for \"8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588\" returns successfully" Oct 2 19:35:33.020190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861795609.mount: Deactivated successfully. Oct 2 19:35:33.241459 kubelet[2091]: E1002 19:35:33.241419 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:33.392269 kubelet[2091]: E1002 19:35:33.392196 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:33.403480 kubelet[2091]: E1002 19:35:33.403419 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:33.939652 env[1632]: time="2023-10-02T19:35:33.939596907Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:35:33.942254 env[1632]: time="2023-10-02T19:35:33.942201657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:35:33.944423 env[1632]: time="2023-10-02T19:35:33.944385822Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:35:33.945047 env[1632]: time="2023-10-02T19:35:33.945009316Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Oct 2 19:35:33.947480 env[1632]: time="2023-10-02T19:35:33.947311649Z" level=info msg="CreateContainer within sandbox \"844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:35:33.965350 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3722507922.mount: Deactivated successfully. Oct 2 19:35:33.976406 env[1632]: time="2023-10-02T19:35:33.976345866Z" level=info msg="CreateContainer within sandbox \"844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\"" Oct 2 19:35:33.977114 env[1632]: time="2023-10-02T19:35:33.977079413Z" level=info msg="StartContainer for \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\"" Oct 2 19:35:34.006307 systemd[1]: Started cri-containerd-c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7.scope. Oct 2 19:35:34.032562 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:35:34.032711 kernel: audit: type=1400 audit(1696275334.023:755): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.037229 kernel: audit: type=1400 audit(1696275334.023:756): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.041803 kernel: audit: type=1400 audit(1696275334.023:757): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.046948 kernel: audit: type=1400 audit(1696275334.023:758): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.051272 kernel: audit: type=1400 audit(1696275334.023:759): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.056231 kernel: audit: type=1400 audit(1696275334.023:760): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC 
avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.061825 kernel: audit: type=1400 audit(1696275334.023:761): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.023000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.072711 kernel: audit: type=1400 audit(1696275334.023:762): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.072832 kernel: audit: type=1400 audit(1696275334.023:763): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.072867 kernel: audit: type=1400 audit(1696275334.025:764): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit: BPF prog-id=92 op=LOAD Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000147c48 a2=10 a3=1c items=0 ppid=2853 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:34.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326533373637313663346633363166373636663966323733386134 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001476b0 a2=3c a3=8 items=0 ppid=2853 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:34.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326533373637313663346633363166373636663966323733386134 Oct 2 
19:35:34.025000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.025000 audit: BPF prog-id=93 op=LOAD Oct 2 19:35:34.025000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001479d8 a2=78 a3=c00026ccf0 items=0 ppid=2853 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:34.025000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326533373637313663346633363166373636663966323733386134 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit: BPF prog-id=94 op=LOAD Oct 2 19:35:34.031000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000147770 a2=78 a3=c00026cd38 items=0 ppid=2853 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:34.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326533373637313663346633363166373636663966323733386134 Oct 2 19:35:34.031000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:35:34.031000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { perfmon } for pid=3009 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit[3009]: AVC avc: denied { bpf } for pid=3009 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:35:34.031000 audit: BPF prog-id=95 op=LOAD Oct 2 19:35:34.031000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000147c30 a2=78 a3=c00026d148 items=0 ppid=2853 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:35:34.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334326533373637313663346633363166373636663966323733386134 Oct 2 19:35:34.099937 env[1632]: time="2023-10-02T19:35:34.099200202Z" level=info msg="StartContainer for \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\" returns successfully" Oct 2 19:35:34.137000 audit[3020]: AVC avc: denied { map_create } for pid=3020 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c70,c560 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c70,c560 tclass=bpf permissive=0 Oct 2 19:35:34.137000 audit[3020]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0000c97d0 a2=48 a3=c0000c97c0 items=0 ppid=2853 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c70,c560 key=(null) Oct 2 19:35:34.137000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:35:34.392414 kubelet[2091]: E1002 19:35:34.392357 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:34.889452 kubelet[2091]: W1002 19:35:34.889412 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a14911_e37c_41e2_896b_7bc9e69a93b3.slice/cri-containerd-8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588.scope WatchSource:0}: container "8ff9faccb171fb1476b6cb3523c23929c23a0f6413e7029d69ec9b1151b1f588" in namespace "k8s.io": not found Oct 2 19:35:35.393382 kubelet[2091]: E1002 19:35:35.393330 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:36.393855 kubelet[2091]: E1002 19:35:36.393804 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:37.394040 kubelet[2091]: E1002 19:35:37.393990 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:37.997507 kubelet[2091]: W1002 19:35:37.997471 2091 manager.go:1159] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a14911_e37c_41e2_896b_7bc9e69a93b3.slice/cri-containerd-43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0.scope WatchSource:0}: task 43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0 not found: not found Oct 2 19:35:38.395140 kubelet[2091]: E1002 19:35:38.395010 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:38.404858 kubelet[2091]: E1002 19:35:38.404822 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:39.395433 kubelet[2091]: E1002 19:35:39.395382 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:40.395818 kubelet[2091]: E1002 19:35:40.395765 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:41.396916 kubelet[2091]: E1002 19:35:41.396860 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:42.397058 kubelet[2091]: E1002 19:35:42.397009 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:43.397547 kubelet[2091]: E1002 19:35:43.397504 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:43.405775 kubelet[2091]: E1002 19:35:43.405746 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:44.398567 kubelet[2091]: E1002 19:35:44.398520 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:45.399096 kubelet[2091]: E1002 19:35:45.399043 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:46.400222 kubelet[2091]: E1002 19:35:46.400171 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:47.400435 kubelet[2091]: E1002 19:35:47.400378 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:47.427374 env[1632]: time="2023-10-02T19:35:47.427316642Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:35:47.454284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979928098.mount: Deactivated successfully. 
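Every attempt at the mount-cgroup init container (Attempt 0 and 1 above, Attempt 2 below) dies the same way: runc reports "write /proc/self/attr/keycreate: invalid argument". The pod spec quoted in the kubelet errors requests SELinuxOptions with type spc_t, and before exec'ing the container process runc labels the session keyring it creates by writing that context to /proc/self/attr/keycreate; EINVAL from that write is what the kernel returns when the loaded policy does not accept the requested context, which is consistent with this image running everything under kernel_t. A minimal sketch of the failing step, assuming an SELinux-enabled host and assuming the final label is composed as system_u:system_r:spc_t:s0 (the exact user/role prefix is not shown in the log):

import errno
import os

def set_keycreate_label(context: str) -> None:
    # Mirror of the step runc performs: write the desired SELinux
    # context into /proc/self/attr/keycreate so the session keyring
    # created afterwards carries that label.
    fd = os.open("/proc/self/attr/keycreate", os.O_WRONLY)
    try:
        os.write(fd, context.encode())
    finally:
        os.close(fd)

try:
    set_keycreate_label("system_u:system_r:spc_t:s0")  # assumed label, see note above
except OSError as exc:
    if exc.errno == errno.EINVAL:
        # Same failure mode as the log: the policy rejected the context.
        print("write /proc/self/attr/keycreate: invalid argument")
    else:
        raise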
Oct 2 19:35:47.455611 kubelet[2091]: I1002 19:35:47.455375 2091 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-t7db7" podStartSLOduration=14.194349568 podCreationTimestamp="2023-10-02 19:35:31 +0000 UTC" firstStartedPulling="2023-10-02 19:35:31.684376181 +0000 UTC m=+199.171655639" lastFinishedPulling="2023-10-02 19:35:33.945349394 +0000 UTC m=+201.432628850" observedRunningTime="2023-10-02 19:35:34.887654939 +0000 UTC m=+202.374934410" watchObservedRunningTime="2023-10-02 19:35:47.455322779 +0000 UTC m=+214.942602251" Oct 2 19:35:47.463499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095940514.mount: Deactivated successfully. Oct 2 19:35:47.469879 env[1632]: time="2023-10-02T19:35:47.469822208Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\"" Oct 2 19:35:47.470707 env[1632]: time="2023-10-02T19:35:47.470674399Z" level=info msg="StartContainer for \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\"" Oct 2 19:35:47.504994 systemd[1]: Started cri-containerd-4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae.scope. Oct 2 19:35:47.528925 systemd[1]: cri-containerd-4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae.scope: Deactivated successfully. Oct 2 19:35:47.747294 env[1632]: time="2023-10-02T19:35:47.747114037Z" level=info msg="shim disconnected" id=4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae Oct 2 19:35:47.747294 env[1632]: time="2023-10-02T19:35:47.747288947Z" level=warning msg="cleaning up after shim disconnected" id=4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae namespace=k8s.io Oct 2 19:35:47.747294 env[1632]: time="2023-10-02T19:35:47.747301689Z" level=info msg="cleaning up dead shim" Oct 2 19:35:47.756830 env[1632]: time="2023-10-02T19:35:47.756768613Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3063 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:35:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:35:47.757217 env[1632]: time="2023-10-02T19:35:47.757153153Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:35:47.758329 env[1632]: time="2023-10-02T19:35:47.758268007Z" level=error msg="Failed to pipe stdout of container \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\"" error="reading from a closed fifo" Oct 2 19:35:47.758518 env[1632]: time="2023-10-02T19:35:47.758464406Z" level=error msg="Failed to pipe stderr of container \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\"" error="reading from a closed fifo" Oct 2 19:35:47.760212 env[1632]: time="2023-10-02T19:35:47.760172193Z" level=error msg="StartContainer for \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 
19:35:47.760469 kubelet[2091]: E1002 19:35:47.760447 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae" Oct 2 19:35:47.760679 kubelet[2091]: E1002 19:35:47.760647 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:35:47.760679 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:35:47.760679 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:35:47.760679 kubelet[2091]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4z4n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:35:47.760902 kubelet[2091]: E1002 19:35:47.760708 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:35:47.894815 kubelet[2091]: I1002 19:35:47.893920 2091 scope.go:117] "RemoveContainer" containerID="43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0" Oct 2 19:35:47.894815 kubelet[2091]: I1002 19:35:47.894791 2091 scope.go:117] "RemoveContainer" 
containerID="43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0" Oct 2 19:35:47.896914 env[1632]: time="2023-10-02T19:35:47.896875204Z" level=info msg="RemoveContainer for \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\"" Oct 2 19:35:47.898770 env[1632]: time="2023-10-02T19:35:47.898724574Z" level=info msg="RemoveContainer for \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\"" Oct 2 19:35:47.899335 env[1632]: time="2023-10-02T19:35:47.899296427Z" level=error msg="RemoveContainer for \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\" failed" error="failed to set removing state for container \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\": container is already in removing state" Oct 2 19:35:47.899797 kubelet[2091]: E1002 19:35:47.899541 2091 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\": container is already in removing state" containerID="43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0" Oct 2 19:35:47.899797 kubelet[2091]: E1002 19:35:47.899574 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0": container is already in removing state; Skipping pod "cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)" Oct 2 19:35:47.900236 kubelet[2091]: E1002 19:35:47.900207 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:35:47.911187 env[1632]: time="2023-10-02T19:35:47.911139596Z" level=info msg="RemoveContainer for \"43c9445ba6aa67f821e8cbc38976881dc29ed273b1c4c645ee3a2e0ce6e6aac0\" returns successfully" Oct 2 19:35:48.400924 kubelet[2091]: E1002 19:35:48.400876 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:48.406779 kubelet[2091]: E1002 19:35:48.406738 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:48.447704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae-rootfs.mount: Deactivated successfully. 
Oct 2 19:35:49.402019 kubelet[2091]: E1002 19:35:49.401961 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:50.402638 kubelet[2091]: E1002 19:35:50.402590 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:50.852556 kubelet[2091]: W1002 19:35:50.852507 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a14911_e37c_41e2_896b_7bc9e69a93b3.slice/cri-containerd-4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae.scope WatchSource:0}: task 4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae not found: not found Oct 2 19:35:51.403747 kubelet[2091]: E1002 19:35:51.403698 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:52.403886 kubelet[2091]: E1002 19:35:52.403846 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:53.241411 kubelet[2091]: E1002 19:35:53.241363 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:53.404046 kubelet[2091]: E1002 19:35:53.404016 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:53.407929 kubelet[2091]: E1002 19:35:53.407901 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:54.404975 kubelet[2091]: E1002 19:35:54.404924 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:55.405930 kubelet[2091]: E1002 19:35:55.405879 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:56.407088 kubelet[2091]: E1002 19:35:56.407032 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:57.407548 kubelet[2091]: E1002 19:35:57.407497 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:58.407738 kubelet[2091]: E1002 19:35:58.407682 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:58.409405 kubelet[2091]: E1002 19:35:58.409379 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:35:59.408621 kubelet[2091]: E1002 19:35:59.408580 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:00.409441 kubelet[2091]: E1002 19:36:00.409394 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:01.410090 kubelet[2091]: E1002 19:36:01.410034 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:01.425360 kubelet[2091]: E1002 19:36:01.425323 2091 pod_workers.go:1300] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:36:02.410304 kubelet[2091]: E1002 19:36:02.410254 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:03.410389 kubelet[2091]: E1002 19:36:03.410356 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:03.410811 kubelet[2091]: E1002 19:36:03.410407 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:04.411294 kubelet[2091]: E1002 19:36:04.411248 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:05.412257 kubelet[2091]: E1002 19:36:05.412203 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:06.412706 kubelet[2091]: E1002 19:36:06.412649 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:07.413788 kubelet[2091]: E1002 19:36:07.413736 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:08.411460 kubelet[2091]: E1002 19:36:08.411427 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:08.414673 kubelet[2091]: E1002 19:36:08.414626 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:09.415338 kubelet[2091]: E1002 19:36:09.415292 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:10.416022 kubelet[2091]: E1002 19:36:10.415969 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:11.416979 kubelet[2091]: E1002 19:36:11.416925 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:12.417367 kubelet[2091]: E1002 19:36:12.417314 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:13.241312 kubelet[2091]: E1002 19:36:13.241271 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:13.263281 env[1632]: time="2023-10-02T19:36:13.263241740Z" level=info msg="StopPodSandbox for \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\"" Oct 2 19:36:13.264785 env[1632]: time="2023-10-02T19:36:13.263340271Z" level=info msg="TearDown network for sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" successfully" Oct 2 19:36:13.264785 env[1632]: time="2023-10-02T19:36:13.263388807Z" level=info msg="StopPodSandbox for \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" returns successfully" Oct 2 19:36:13.266855 env[1632]: 
time="2023-10-02T19:36:13.266774120Z" level=info msg="RemovePodSandbox for \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\"" Oct 2 19:36:13.266965 env[1632]: time="2023-10-02T19:36:13.266860437Z" level=info msg="Forcibly stopping sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\"" Oct 2 19:36:13.267016 env[1632]: time="2023-10-02T19:36:13.266956924Z" level=info msg="TearDown network for sandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" successfully" Oct 2 19:36:13.273720 env[1632]: time="2023-10-02T19:36:13.273607409Z" level=info msg="RemovePodSandbox \"090a073ede2c6e7a99b4934a49a6b7262ce5604feef22741e7119118d0a36e68\" returns successfully" Oct 2 19:36:13.413022 kubelet[2091]: E1002 19:36:13.412988 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:13.418234 kubelet[2091]: E1002 19:36:13.418201 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:13.427873 env[1632]: time="2023-10-02T19:36:13.427826299Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:36:13.452143 env[1632]: time="2023-10-02T19:36:13.452065570Z" level=info msg="CreateContainer within sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\"" Oct 2 19:36:13.453436 env[1632]: time="2023-10-02T19:36:13.453385800Z" level=info msg="StartContainer for \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\"" Oct 2 19:36:13.491413 systemd[1]: Started cri-containerd-fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436.scope. Oct 2 19:36:13.525221 systemd[1]: cri-containerd-fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436.scope: Deactivated successfully. 
Oct 2 19:36:13.546472 env[1632]: time="2023-10-02T19:36:13.546412368Z" level=info msg="shim disconnected" id=fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436 Oct 2 19:36:13.546714 env[1632]: time="2023-10-02T19:36:13.546476091Z" level=warning msg="cleaning up after shim disconnected" id=fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436 namespace=k8s.io Oct 2 19:36:13.546714 env[1632]: time="2023-10-02T19:36:13.546487875Z" level=info msg="cleaning up dead shim" Oct 2 19:36:13.560431 env[1632]: time="2023-10-02T19:36:13.560379421Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3103 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:36:13Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:36:13.560702 env[1632]: time="2023-10-02T19:36:13.560645574Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:36:13.560922 env[1632]: time="2023-10-02T19:36:13.560886390Z" level=error msg="Failed to pipe stdout of container \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\"" error="reading from a closed fifo" Oct 2 19:36:13.560999 env[1632]: time="2023-10-02T19:36:13.560968663Z" level=error msg="Failed to pipe stderr of container \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\"" error="reading from a closed fifo" Oct 2 19:36:13.563059 env[1632]: time="2023-10-02T19:36:13.563013108Z" level=error msg="StartContainer for \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:36:13.563361 kubelet[2091]: E1002 19:36:13.563333 2091 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436" Oct 2 19:36:13.563850 kubelet[2091]: E1002 19:36:13.563814 2091 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:36:13.563850 kubelet[2091]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:36:13.563850 kubelet[2091]: rm /hostbin/cilium-mount Oct 2 19:36:13.563850 kubelet[2091]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4z4n6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:36:13.564090 kubelet[2091]: E1002 19:36:13.563877 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:36:13.951386 kubelet[2091]: I1002 19:36:13.951249 2091 scope.go:117] "RemoveContainer" containerID="4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae" Oct 2 19:36:13.952403 kubelet[2091]: I1002 19:36:13.951782 2091 scope.go:117] "RemoveContainer" containerID="4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae" Oct 2 19:36:13.957161 env[1632]: time="2023-10-02T19:36:13.956931572Z" level=info msg="RemoveContainer for \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\"" Oct 2 19:36:13.957704 env[1632]: time="2023-10-02T19:36:13.957675028Z" level=info msg="RemoveContainer for \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\"" Oct 2 19:36:13.957903 env[1632]: time="2023-10-02T19:36:13.957861861Z" level=error msg="RemoveContainer for \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\" failed" error="failed to set removing state for container \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\": container is already in removing state" Oct 2 19:36:13.958118 kubelet[2091]: E1002 19:36:13.958082 2091 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\": 
container is already in removing state" containerID="4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae" Oct 2 19:36:13.958224 kubelet[2091]: E1002 19:36:13.958119 2091 kuberuntime_container.go:820] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae": container is already in removing state; Skipping pod "cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)" Oct 2 19:36:13.958499 kubelet[2091]: E1002 19:36:13.958480 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:36:13.960670 env[1632]: time="2023-10-02T19:36:13.960639058Z" level=info msg="RemoveContainer for \"4eb2079326c72f395894a9851cbe3fc71ce3b387467ac096b3247508bc6131ae\" returns successfully" Oct 2 19:36:14.419239 kubelet[2091]: E1002 19:36:14.419055 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:14.439267 systemd[1]: run-containerd-runc-k8s.io-fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436-runc.K0NGBr.mount: Deactivated successfully. Oct 2 19:36:14.439389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436-rootfs.mount: Deactivated successfully. Oct 2 19:36:15.420191 kubelet[2091]: E1002 19:36:15.420146 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:16.420856 kubelet[2091]: E1002 19:36:16.420802 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:16.651807 kubelet[2091]: W1002 19:36:16.651763 2091 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a14911_e37c_41e2_896b_7bc9e69a93b3.slice/cri-containerd-fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436.scope WatchSource:0}: task fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436 not found: not found Oct 2 19:36:17.421165 kubelet[2091]: E1002 19:36:17.421090 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:18.413822 kubelet[2091]: E1002 19:36:18.413784 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:18.422025 kubelet[2091]: E1002 19:36:18.421989 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:19.422781 kubelet[2091]: E1002 19:36:19.422731 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:20.423779 kubelet[2091]: E1002 19:36:20.423740 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:21.425228 kubelet[2091]: E1002 19:36:21.425189 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:22.425923 kubelet[2091]: E1002 19:36:22.425875 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:23.415073 kubelet[2091]: E1002 19:36:23.414917 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:23.426266 kubelet[2091]: E1002 19:36:23.426234 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:24.427820 kubelet[2091]: E1002 19:36:24.427762 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:25.427864 kubelet[2091]: E1002 19:36:25.427837 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:26.424841 kubelet[2091]: E1002 19:36:26.424799 2091 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dl6c9_kube-system(10a14911-e37c-41e2-896b-7bc9e69a93b3)\"" pod="kube-system/cilium-dl6c9" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" Oct 2 19:36:26.428734 kubelet[2091]: E1002 19:36:26.428706 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:27.429439 kubelet[2091]: E1002 19:36:27.429407 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:28.415959 kubelet[2091]: E1002 19:36:28.415925 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:28.430349 kubelet[2091]: E1002 19:36:28.430298 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:29.431372 kubelet[2091]: E1002 19:36:29.431340 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:30.432440 kubelet[2091]: E1002 19:36:30.432390 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:31.433024 kubelet[2091]: E1002 19:36:31.432968 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:32.433577 kubelet[2091]: E1002 19:36:32.433478 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:32.875233 env[1632]: time="2023-10-02T19:36:32.875184350Z" level=info msg="StopPodSandbox for \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\"" Oct 2 19:36:32.875691 env[1632]: time="2023-10-02T19:36:32.875275072Z" level=info msg="Container to stop \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:36:32.877572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26-shm.mount: Deactivated successfully. 
Oct 2 19:36:32.890216 systemd[1]: cri-containerd-746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26.scope: Deactivated successfully. Oct 2 19:36:32.890000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:36:32.892705 kernel: kauditd_printk_skb: 50 callbacks suppressed Oct 2 19:36:32.892805 kernel: audit: type=1334 audit(1696275392.890:774): prog-id=88 op=UNLOAD Oct 2 19:36:32.896000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:36:32.899169 kernel: audit: type=1334 audit(1696275392.896:775): prog-id=91 op=UNLOAD Oct 2 19:36:32.902496 env[1632]: time="2023-10-02T19:36:32.902451474Z" level=info msg="StopContainer for \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\" with timeout 30 (s)" Oct 2 19:36:32.903005 env[1632]: time="2023-10-02T19:36:32.902902457Z" level=info msg="Stop container \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\" with signal terminated" Oct 2 19:36:32.926198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26-rootfs.mount: Deactivated successfully. Oct 2 19:36:32.935494 systemd[1]: cri-containerd-c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7.scope: Deactivated successfully. Oct 2 19:36:32.935000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:36:32.938153 kernel: audit: type=1334 audit(1696275392.935:776): prog-id=92 op=UNLOAD Oct 2 19:36:32.944000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:36:32.947216 kernel: audit: type=1334 audit(1696275392.944:777): prog-id=95 op=UNLOAD Oct 2 19:36:32.950898 env[1632]: time="2023-10-02T19:36:32.950852756Z" level=info msg="shim disconnected" id=746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26 Oct 2 19:36:32.951227 env[1632]: time="2023-10-02T19:36:32.951179758Z" level=warning msg="cleaning up after shim disconnected" id=746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26 namespace=k8s.io Oct 2 19:36:32.951227 env[1632]: time="2023-10-02T19:36:32.951205861Z" level=info msg="cleaning up dead shim" Oct 2 19:36:32.965770 env[1632]: time="2023-10-02T19:36:32.965671763Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3150 runtime=io.containerd.runc.v2\n" Oct 2 19:36:32.966149 env[1632]: time="2023-10-02T19:36:32.966095665Z" level=info msg="TearDown network for sandbox \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" successfully" Oct 2 19:36:32.966249 env[1632]: time="2023-10-02T19:36:32.966150556Z" level=info msg="StopPodSandbox for \"746d4366125475c28609d7d70d4060faad06f5ea5bf46d18caaff6bc18bc4a26\" returns successfully" Oct 2 19:36:32.979956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7-rootfs.mount: Deactivated successfully. 
Oct 2 19:36:32.991683 env[1632]: time="2023-10-02T19:36:32.991628508Z" level=info msg="shim disconnected" id=c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7 Oct 2 19:36:32.991683 env[1632]: time="2023-10-02T19:36:32.991676571Z" level=warning msg="cleaning up after shim disconnected" id=c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7 namespace=k8s.io Oct 2 19:36:32.991683 env[1632]: time="2023-10-02T19:36:32.991689404Z" level=info msg="cleaning up dead shim" Oct 2 19:36:32.993619 kubelet[2091]: I1002 19:36:32.993175 2091 scope.go:117] "RemoveContainer" containerID="fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436" Oct 2 19:36:32.995995 env[1632]: time="2023-10-02T19:36:32.995957560Z" level=info msg="RemoveContainer for \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\"" Oct 2 19:36:33.004805 env[1632]: time="2023-10-02T19:36:33.004705691Z" level=info msg="RemoveContainer for \"fe1e508db58e0ff7a1986d7fc790469dfd84ad122f7dc264662889aff202f436\" returns successfully" Oct 2 19:36:33.011421 env[1632]: time="2023-10-02T19:36:33.011368647Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3170 runtime=io.containerd.runc.v2\n" Oct 2 19:36:33.017679 env[1632]: time="2023-10-02T19:36:33.017631199Z" level=info msg="StopContainer for \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\" returns successfully" Oct 2 19:36:33.018345 env[1632]: time="2023-10-02T19:36:33.018294167Z" level=info msg="StopPodSandbox for \"844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f\"" Oct 2 19:36:33.018477 env[1632]: time="2023-10-02T19:36:33.018361646Z" level=info msg="Container to stop \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:36:33.021732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f-shm.mount: Deactivated successfully. Oct 2 19:36:33.032206 systemd[1]: cri-containerd-844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f.scope: Deactivated successfully. Oct 2 19:36:33.032000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:36:33.035306 kernel: audit: type=1334 audit(1696275393.032:778): prog-id=84 op=UNLOAD Oct 2 19:36:33.036000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:36:33.039421 kernel: audit: type=1334 audit(1696275393.036:779): prog-id=87 op=UNLOAD Oct 2 19:36:33.076238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f-rootfs.mount: Deactivated successfully. Oct 2 19:36:33.086085 kubelet[2091]: I1002 19:36:33.085987 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cni-path\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.086085 kubelet[2091]: I1002 19:36:33.086031 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cni-path" (OuterVolumeSpecName: "cni-path") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.086085 kubelet[2091]: I1002 19:36:33.086057 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z4n6\" (UniqueName: \"kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-kube-api-access-4z4n6\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.086085 kubelet[2091]: I1002 19:36:33.086091 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-etc-cni-netd\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086232 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-cgroup\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086278 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-clustermesh-secrets\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086304 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-hostproc\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086329 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-run\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086358 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-ipsec-secrets\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086398 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-xtables-lock\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086429 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-config-path\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086458 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-hubble-tls\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") 
" Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086496 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-net\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086526 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-kernel\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086564 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-lib-modules\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086590 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-bpf-maps\") pod \"10a14911-e37c-41e2-896b-7bc9e69a93b3\" (UID: \"10a14911-e37c-41e2-896b-7bc9e69a93b3\") " Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086646 2091 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cni-path\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086682 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.087158 kubelet[2091]: I1002 19:36:33.086721 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.090004 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.090073 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.090481 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.090525 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.090558 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.090570 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-hostproc" (OuterVolumeSpecName: "hostproc") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.090584 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:36:33.091691 kubelet[2091]: I1002 19:36:33.091315 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:36:33.095635 kubelet[2091]: I1002 19:36:33.095556 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:36:33.096361 env[1632]: time="2023-10-02T19:36:33.096164981Z" level=info msg="shim disconnected" id=844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f Oct 2 19:36:33.096556 env[1632]: time="2023-10-02T19:36:33.096533732Z" level=warning msg="cleaning up after shim disconnected" id=844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f namespace=k8s.io Oct 2 19:36:33.097228 env[1632]: time="2023-10-02T19:36:33.097202731Z" level=info msg="cleaning up dead shim" Oct 2 19:36:33.102142 kubelet[2091]: I1002 19:36:33.102082 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-kube-api-access-4z4n6" (OuterVolumeSpecName: "kube-api-access-4z4n6") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "kube-api-access-4z4n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:36:33.103141 kubelet[2091]: I1002 19:36:33.103094 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:36:33.107652 kubelet[2091]: I1002 19:36:33.107606 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "10a14911-e37c-41e2-896b-7bc9e69a93b3" (UID: "10a14911-e37c-41e2-896b-7bc9e69a93b3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:36:33.110567 env[1632]: time="2023-10-02T19:36:33.110514573Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:36:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3208 runtime=io.containerd.runc.v2\n" Oct 2 19:36:33.110889 env[1632]: time="2023-10-02T19:36:33.110853207Z" level=info msg="TearDown network for sandbox \"844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f\" successfully" Oct 2 19:36:33.111037 env[1632]: time="2023-10-02T19:36:33.110886316Z" level=info msg="StopPodSandbox for \"844ca5750527052201f27faadc22a017e21edee704fd635b4c03e6a05d773f0f\" returns successfully" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187280 2091 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-hostproc\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187319 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-run\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187337 2091 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4z4n6\" (UniqueName: \"kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-kube-api-access-4z4n6\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187355 2091 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-etc-cni-netd\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187369 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-cgroup\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187383 2091 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-clustermesh-secrets\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187396 2091 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/10a14911-e37c-41e2-896b-7bc9e69a93b3-hubble-tls\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187410 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-ipsec-secrets\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187423 2091 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-xtables-lock\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187436 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10a14911-e37c-41e2-896b-7bc9e69a93b3-cilium-config-path\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187450 2091 reconciler_common.go:300] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-net\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187464 2091 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-host-proc-sys-kernel\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187477 2091 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-lib-modules\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.187534 kubelet[2091]: I1002 19:36:33.187490 2091 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/10a14911-e37c-41e2-896b-7bc9e69a93b3-bpf-maps\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.241868 kubelet[2091]: E1002 19:36:33.241819 2091 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:33.288340 kubelet[2091]: I1002 19:36:33.288240 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339af218-a779-4826-9238-5140e0a6ecd3-cilium-config-path\") pod \"339af218-a779-4826-9238-5140e0a6ecd3\" (UID: \"339af218-a779-4826-9238-5140e0a6ecd3\") " Oct 2 19:36:33.289174 kubelet[2091]: I1002 19:36:33.289123 2091 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlln4\" (UniqueName: \"kubernetes.io/projected/339af218-a779-4826-9238-5140e0a6ecd3-kube-api-access-tlln4\") pod \"339af218-a779-4826-9238-5140e0a6ecd3\" (UID: \"339af218-a779-4826-9238-5140e0a6ecd3\") " Oct 2 19:36:33.294600 kubelet[2091]: I1002 19:36:33.294555 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/339af218-a779-4826-9238-5140e0a6ecd3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "339af218-a779-4826-9238-5140e0a6ecd3" (UID: "339af218-a779-4826-9238-5140e0a6ecd3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:36:33.297011 kubelet[2091]: I1002 19:36:33.296967 2091 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339af218-a779-4826-9238-5140e0a6ecd3-kube-api-access-tlln4" (OuterVolumeSpecName: "kube-api-access-tlln4") pod "339af218-a779-4826-9238-5140e0a6ecd3" (UID: "339af218-a779-4826-9238-5140e0a6ecd3"). InnerVolumeSpecName "kube-api-access-tlln4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:36:33.299277 systemd[1]: Removed slice kubepods-burstable-pod10a14911_e37c_41e2_896b_7bc9e69a93b3.slice. 
Oct 2 19:36:33.391312 kubelet[2091]: I1002 19:36:33.391270 2091 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/339af218-a779-4826-9238-5140e0a6ecd3-cilium-config-path\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.391312 kubelet[2091]: I1002 19:36:33.391318 2091 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tlln4\" (UniqueName: \"kubernetes.io/projected/339af218-a779-4826-9238-5140e0a6ecd3-kube-api-access-tlln4\") on node \"172.31.22.219\" DevicePath \"\"" Oct 2 19:36:33.417046 kubelet[2091]: E1002 19:36:33.417009 2091 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:36:33.425960 kubelet[2091]: I1002 19:36:33.425917 2091 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="10a14911-e37c-41e2-896b-7bc9e69a93b3" path="/var/lib/kubelet/pods/10a14911-e37c-41e2-896b-7bc9e69a93b3/volumes" Oct 2 19:36:33.430012 systemd[1]: Removed slice kubepods-besteffort-pod339af218_a779_4826_9238_5140e0a6ecd3.slice. Oct 2 19:36:33.433845 kubelet[2091]: E1002 19:36:33.433808 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:33.876916 systemd[1]: var-lib-kubelet-pods-10a14911\x2de37c\x2d41e2\x2d896b\x2d7bc9e69a93b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4z4n6.mount: Deactivated successfully. Oct 2 19:36:33.877081 systemd[1]: var-lib-kubelet-pods-339af218\x2da779\x2d4826\x2d9238\x2d5140e0a6ecd3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlln4.mount: Deactivated successfully. Oct 2 19:36:33.877178 systemd[1]: var-lib-kubelet-pods-10a14911\x2de37c\x2d41e2\x2d896b\x2d7bc9e69a93b3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:36:33.877257 systemd[1]: var-lib-kubelet-pods-10a14911\x2de37c\x2d41e2\x2d896b\x2d7bc9e69a93b3-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:36:33.877345 systemd[1]: var-lib-kubelet-pods-10a14911\x2de37c\x2d41e2\x2d896b\x2d7bc9e69a93b3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:36:33.997155 kubelet[2091]: I1002 19:36:33.997112 2091 scope.go:117] "RemoveContainer" containerID="c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7" Oct 2 19:36:34.000934 env[1632]: time="2023-10-02T19:36:34.000874299Z" level=info msg="RemoveContainer for \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\"" Oct 2 19:36:34.005321 env[1632]: time="2023-10-02T19:36:34.005275229Z" level=info msg="RemoveContainer for \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\" returns successfully" Oct 2 19:36:34.005582 kubelet[2091]: I1002 19:36:34.005508 2091 scope.go:117] "RemoveContainer" containerID="c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7" Oct 2 19:36:34.006042 env[1632]: time="2023-10-02T19:36:34.005956248Z" level=error msg="ContainerStatus for \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\": not found" Oct 2 19:36:34.006317 kubelet[2091]: E1002 19:36:34.006294 2091 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\": not found" containerID="c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7" Oct 2 19:36:34.006449 kubelet[2091]: I1002 19:36:34.006432 2091 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7"} err="failed to get container status \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c42e376716c4f361f766f9f2738a4446178533377f130a57d85e4c6ab6dcdca7\": not found" Oct 2 19:36:34.434652 kubelet[2091]: E1002 19:36:34.434604 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:36:35.426645 kubelet[2091]: I1002 19:36:35.426604 2091 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="339af218-a779-4826-9238-5140e0a6ecd3" path="/var/lib/kubelet/pods/339af218-a779-4826-9238-5140e0a6ecd3/volumes" Oct 2 19:36:35.435580 kubelet[2091]: E1002 19:36:35.435542 2091 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"