LLVM backend for Accelerate
This package compiles Accelerate code to LLVM IR, and executes that code on multicore CPUs as well as NVIDIA GPUs. This avoids the need to go through
clang. For details on Accelerate, refer to the main repository.
We love all kinds of contributions, so feel free to open issues for missing features as well as report (or fix!) bugs on the issue tracker.
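As a taste of what this package provides, here is a minimal sketch of a dot product written in Accelerate and executed with the CPU backend. The module and function names follow the packages in this repository, but the `dotp` example itself is illustrative:

```haskell
import Data.Array.Accelerate              as A
import Data.Array.Accelerate.LLVM.Native  as CPU
-- import Data.Array.Accelerate.LLVM.PTX  as GPU   -- for NVIDIA GPUs

-- Dot product expressed in the Accelerate language
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

main :: IO ()
main = do
  let xs = fromList (Z :. 5) [1..] :: Vector Float
      ys = fromList (Z :. 5) [1..] :: Vector Float
  -- 'run' compiles the expression to LLVM IR and executes it natively
  print (CPU.run (dotp (use xs) (use ys)))
```

Swapping `CPU.run` for the corresponding `run` from `Data.Array.Accelerate.LLVM.PTX` executes the same program on an NVIDIA GPU.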
Haskell dependencies are available from Hackage, but there are several external library dependencies that you will need to install as well:

- `libFFI` (if using the `accelerate-llvm-native` backend for multicore CPUs)
- `CUDA` (if using the `accelerate-llvm-ptx` backend for NVIDIA GPUs)
A docker container is provided with this package preinstalled (via `stack`) at `/opt/accelerate-llvm`. Note that if you wish to use the `accelerate-llvm-ptx` GPU backend, you will need to install the NVIDIA docker plugin; see that page for more information.
$ docker run -it tmcdonell/accelerate-llvm
When installing LLVM, make sure that it includes the `libLLVM` shared library. If you want to use the GPU-targeting `accelerate-llvm-ptx` backend, make sure you install (or build) LLVM with the 'nvptx' target.
Example using Homebrew on macOS:
$ brew install llvm-hs/llvm/llvm-9
For Debian/Ubuntu based Linux distributions, the LLVM.org website provides binary distribution packages. Check apt.llvm.org for instructions for adding the correct package database for your OS version, and then:
$ apt-get install llvm-9-dev
If your OS does not have an appropriate LLVM distribution available, you can also build from source. Detailed build instructions are available on the LLVM.org website. Note that you will require at least CMake 3.4.3 and a recent C++ compiler; at least Clang 3.1, GCC 4.8, or Visual Studio 2015 (update 3).
Download and unpack the LLVM-9 source code. We'll refer to the path that the source tree was unpacked to as `LLVM_SRC`. Only the main LLVM source tree is required, but you can optionally add other components such as the Clang compiler or the Polly loop optimiser. See the LLVM releases page for the complete list.
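For example, to fetch and unpack the LLVM-9 release (the URL and version number here are illustrative; check the LLVM releases page for the current links):

```sh
$ wget https://releases.llvm.org/9.0.0/llvm-9.0.0.src.tar.xz
$ tar xf llvm-9.0.0.src.tar.xz
$ export LLVM_SRC=$PWD/llvm-9.0.0.src
```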
Create a temporary build directory and `cd` into it, for example:

```sh
$ mkdir /tmp/build
$ cd /tmp/build
```
Execute the following to configure the build. Here `INSTALL_PREFIX` is where LLVM is to be installed, for example:

```sh
$ cmake $LLVM_SRC \
    -DCMAKE_INSTALL_PREFIX=$INSTALL_PREFIX \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_ASSERTIONS=ON \
    -DLLVM_BUILD_LLVM_DYLIB=ON \
    -DLLVM_LINK_LLVM_DYLIB=ON
```

See options and variables for a list of additional build parameters you can specify.
Build and install:

```sh
$ cmake --build .
$ cmake --build . --target install
```
For macOS only, some additional steps are useful to work around issues related to System Integrity Protection:

```sh
cd $INSTALL_PREFIX/lib
ln -s libLLVM.dylib libLLVM-9.dylib
install_name_tool -id $PWD/libLTO.dylib libLTO.dylib
install_name_tool -id $PWD/libLLVM.dylib libLLVM.dylib
install_name_tool -change '@rpath/libLLVM.dylib' $PWD/libLLVM.dylib libLTO.dylib
```
Once the dependencies are installed, we are ready to install `accelerate-llvm`:

```sh
$ ln -s stack-8.8.yaml stack.yaml
$ stack setup
$ stack install
```
The `accelerate-llvm-ptx` backend can optionally be compiled to generate GPU code using the `libNVVM` library, rather than LLVM's built-in NVPTX code generator. `libNVVM` is a closed-source library distributed as part of the NVIDIA CUDA toolkit, and is what the `nvcc` compiler itself uses internally when compiling CUDA C code. Using `libNVVM` may improve GPU performance compared to the code generator built into LLVM. One difficulty, however, is that since `libNVVM` is itself based on LLVM, and typically lags LLVM by several releases, you must install `accelerate-llvm` with a "compatible" version of LLVM, which will depend on the version of the CUDA toolkit you have installed. The following table shows combinations which have been tested:
|           | LLVM-3.3 | LLVM-3.4 | LLVM-3.5 | LLVM-3.8 | LLVM-3.9 | LLVM-4.0 | LLVM-5.0 | LLVM-6.0 | LLVM-7 | LLVM-8 | LLVM-9 |
| --------- | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :------: | :----: | :----: | :----: |
| CUDA-7.0  | ⭕       | ❌       |          |          |          |          |          |          |        |        |        |
| CUDA-7.5  |          | ⭕       | ⭕       | ❌       |          |          |          |          |        |        |        |
| CUDA-8.0  |          |          | ⭕       | ⭕       | ❌       | ❌       |          |          |        |        |        |
| CUDA-9.0  |          |          |          |          |          | ❌       | ❌       |          |        |        |        |
| CUDA-9.1  |          |          |          |          |          |          |          |          |        |        |        |
| CUDA-9.2  |          |          |          |          |          |          |          |          |        |        |        |
| CUDA-10.0 |          |          |          |          |          |          |          |          |        |        |        |
| CUDA-10.1 |          |          |          |          |          |          |          |          |        |        |        |
Where ⭕ = Works, and ❌ = Does not work.
The above table is incomplete! If you try a particular combination and find that it does or does not work, please let us know!
Note that the above restrictions on CUDA and LLVM version exist only if you want to use the NVVM component. Otherwise, you should be free to use any combination of CUDA and LLVM.
Also note that `accelerate-llvm-ptx` itself currently requires at least LLVM-4.0.
If installing via `stack`, either edit the `stack.yaml` and add the following section:

```yaml
flags:
  accelerate-llvm-ptx:
    nvvm: true
```
Or install using the following option on the command line:
$ stack install accelerate-llvm-ptx --flag accelerate-llvm-ptx:nvvm
If installing via `cabal`:
$ cabal install accelerate-llvm-ptx -fnvvm