# RAPIDS Memory Manager (RMM)
Achieving optimal performance in GPU-centric workflows frequently requires customizing how host and device memory are allocated. For example, using "pinned" host memory for asynchronous host <-> device memory transfers, or using a device memory pool sub-allocator to reduce the cost of dynamic device memory allocation.
The goal of the RAPIDS Memory Manager (RMM) is to provide:

- A common interface that allows customizing device and host memory allocation
- A collection of implementations of the interface
- A collection of data structures that use the interface for memory allocation
For information on the interface RMM provides and how to use RMM in your C++ code, see below.
**NOTE:** For the latest stable README.md, ensure you are on the `main` branch.
## Installation

### Conda

RMM can be installed with Conda (miniconda, or the full Anaconda distribution) from the `rapidsai` channel:
```bash
# for CUDA 10.2
conda install -c nvidia -c rapidsai -c conda-forge -c defaults \
    rmm cudatoolkit=10.2

# for CUDA 10.1
conda install -c nvidia -c rapidsai -c conda-forge -c defaults \
    rmm cudatoolkit=10.1

# for CUDA 10.0
conda install -c nvidia -c rapidsai -c conda-forge -c defaults \
    rmm cudatoolkit=10.0
```
We also provide nightly Conda packages built from the HEAD of our latest development branch.
Note: RMM is supported only on Linux, and with Python versions 3.7 and later.
Note: The RMM package from Conda requires building with GCC 7 or later. Otherwise, your application may fail to build.
See the Get RAPIDS version picker for more OS and version info.
## Building from Source

### Get RMM Dependencies

Compiler requirements:

- `gcc` version 7.0 or higher required
- `nvcc` version 9.0 or higher recommended
- `cmake` version 3.18 or higher

CUDA/GPU requirements:

- You can obtain CUDA from https://developer.nvidia.com/cuda-downloads
To install RMM from source, ensure the dependencies are met and follow the steps below:
```bash
$ git clone --recurse-submodules https://github.com/rapidsai/rmm.git
$ cd rmm
```
Follow the instructions under "Create the conda development environment `cudf_dev`" in the cuDF README:

```bash
# create the conda environment (assuming in the base `cudf` directory)
$ conda env create --name cudf_dev --file conda/environments/dev_py35.yml
# activate the environment
$ source activate cudf_dev
```
Build and install `librmm` using cmake & make. CMake depends on the `nvcc` executable being on your path or defined in the `CUDACXX` environment variable.
```bash
$ mkdir build                                       # make a build directory
$ cd build                                          # enter the build directory
$ cmake .. -DCMAKE_INSTALL_PREFIX=/install/path     # configure cmake ... use $CONDA_PREFIX if you're using Anaconda
$ make -j                                           # compile the library librmm.so ... '-j' will start a parallel job using the number of physical cores available on your system
$ make install                                      # install the library librmm.so to '/install/path'
```
Alternatively, build and install `librmm` and `rmm` using `build.sh`. `build.sh` creates the build dir at the root of the git repository and depends on the `nvcc` executable being on your path or defined in the `CUDACXX` environment variable.
```bash
$ ./build.sh -h               # Display help and exit
$ ./build.sh -n librmm        # Build librmm without installing
$ ./build.sh -n rmm           # Build rmm without installing
$ ./build.sh -n librmm rmm    # Build librmm and rmm without installing
$ ./build.sh librmm rmm       # Build and install librmm and rmm
```
To run tests (optional):

```bash
$ cd build      # if you are not already in the build directory
$ make test
```
Build, install, and test the `rmm` Python package, in the `python` folder:

```bash
$ python setup.py build_ext --inplace
$ python setup.py install
$ pytest -v
```
Done! You are ready to develop for the RMM OSS project.
RMM uses CPM.cmake to handle third-party dependencies like spdlog, Thrust, GoogleTest, and GoogleBenchmark. In general you won't have to worry about it. If CMake finds an appropriate version on your system, it uses it (you can help it along by setting `CMAKE_PREFIX_PATH` to point to the installed location). Otherwise those dependencies will be downloaded as part of the build.

If you frequently start new builds from scratch, consider setting the environment variable `CPM_SOURCE_CACHE` to an external download directory to avoid repeated downloads of the third-party dependencies.
## Using RMM in a Downstream CMake Project

The installed RMM library provides a set of config files that makes it easy to integrate RMM into your own CMake project. In your `CMakeLists.txt`, just add

```cmake
find_package(rmm [VERSION])
# ...
target_link_libraries(<your-target> (PRIVATE|PUBLIC) rmm::rmm)
```
Since RMM is a header-only library, this does not actually link RMM, but it makes the headers available and pulls in transitive dependencies. If RMM is not installed in a default location, use `CMAKE_PREFIX_PATH` or `rmm_ROOT` to point to its location.
One of RMM's dependencies is the Thrust library, so the above automatically pulls in `Thrust` by means of a dependency on the `rmm::Thrust` target. By default it uses the standard configuration of Thrust. If you want to customize it, you can set the variables `THRUST_HOST_SYSTEM` and `THRUST_DEVICE_SYSTEM`; see Thrust's CMake documentation.
## Using RMM in C++

The first goal of RMM is to provide a common interface for device and host memory allocation. This allows both users and implementers of custom allocation logic to program to a single interface.
To this end, RMM defines two abstract interface classes:

- `rmm::mr::device_memory_resource` for device memory allocation
- `rmm::mr::host_memory_resource` for host memory allocation
These classes are based on the `std::pmr::memory_resource` interface class introduced in C++17 for polymorphic memory allocation.
### `device_memory_resource`

`rmm::mr::device_memory_resource` is the base class that defines the interface for allocating and freeing device memory.
It has two key functions:

1. `void* device_memory_resource::allocate(std::size_t bytes, cuda_stream_view s)`
   - Returns a pointer to an allocation of at least `bytes` bytes.
2. `void device_memory_resource::deallocate(void* p, std::size_t bytes, cuda_stream_view s)`
   - Reclaims a previous allocation of size `bytes` pointed to by `p`.
   - `p` *must* have been returned by a previous call to `allocate(bytes)`, otherwise behavior is undefined.
It is up to a derived class to provide implementations of these functions. See available resources for example `device_memory_resource` derived classes.
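For illustration, here is a minimal sketch of a custom derived resource (the class name `my_memory_resource` is hypothetical) that simply forwards to `cudaMalloc`/`cudaFree`. Derived classes implement the private virtual `do_allocate`/`do_deallocate` functions; depending on your RMM version, a few additional virtual functions (e.g. `do_get_mem_info`) may also need overriding, and error handling (throwing `rmm::bad_alloc` on failure) is omitted for brevity.

```c++
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/device_memory_resource.hpp>

#include <cuda_runtime_api.h>

// Hypothetical minimal resource: forwards every request to cudaMalloc/cudaFree.
// Note: synchronous cudaMalloc/cudaFree ignore the stream argument.
class my_memory_resource final : public rmm::mr::device_memory_resource {
 private:
  void* do_allocate(std::size_t bytes, rmm::cuda_stream_view stream) override
  {
    void* p{nullptr};
    cudaMalloc(&p, bytes);  // a real resource should check the result and throw rmm::bad_alloc
    return p;
  }

  void do_deallocate(void* p, std::size_t bytes, rmm::cuda_stream_view stream) override
  {
    cudaFree(p);
  }
};
```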
Unlike `std::pmr::memory_resource`, `rmm::mr::device_memory_resource` does not allow specifying an alignment argument. All allocations are required to be aligned to at least 256B. Furthermore, `device_memory_resource` adds an additional `cuda_stream_view` argument to allow specifying the stream on which to perform the (de)allocation.
#### `cuda_stream_view` and `cuda_stream`
`rmm::cuda_stream_view` is a simple non-owning wrapper around a CUDA `cudaStream_t`. This wrapper's purpose is to provide strong type safety for stream types. (`cudaStream_t` is an alias for a pointer, which can lead to ambiguity in APIs when it is assigned `0`.) All RMM stream-ordered APIs take a `rmm::cuda_stream_view` argument.
`rmm::cuda_stream` is a simple owning wrapper around a CUDA `cudaStream_t`. This class provides RAII semantics (constructor creates the CUDA stream, destructor destroys it). An `rmm::cuda_stream` can never represent the CUDA default stream or per-thread default stream; it only ever represents a single non-default stream. `rmm::cuda_stream` cannot be copied, but can be moved.
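A brief sketch of how these two types are typically used together (using the `rmm::device_buffer` constructor described later):

```c++
#include <rmm/cuda_stream.hpp>
#include <rmm/cuda_stream_view.hpp>
#include <rmm/device_buffer.hpp>

{
  rmm::cuda_stream stream;                      // RAII: creates a new non-default CUDA stream
  rmm::cuda_stream_view view = stream.view();  // non-owning view of the same stream
  rmm::device_buffer buf{100, view};           // stream-ordered allocation on `stream`
  // ... launch kernels on `view.value()` that use `buf` ...
}  // `buf` is freed, then the stream is destroyed when `stream` goes out of scope
```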
#### `cuda_stream_pool`

`rmm::cuda_stream_pool` provides fast access to a pool of CUDA streams. This class can be used to create a set of `cuda_stream` objects whose lifetime is equal to the `cuda_stream_pool`. Using the stream pool can be faster than creating the streams on the fly. The size of the pool is configurable. Depending on this size, multiple calls to `cuda_stream_pool::get_stream()` may return instances of `rmm::cuda_stream_view` that represent identical CUDA streams.
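For example (a small sketch; the pool size of 16 is an arbitrary choice):

```c++
#include <rmm/cuda_stream_pool.hpp>

rmm::cuda_stream_pool pool{16};               // creates 16 streams up front
rmm::cuda_stream_view s = pool.get_stream();  // fast: returns a view of one of the pool's streams
// use `s` for kernels or stream-ordered allocation; do not destroy it --
// the pool owns its streams for its entire lifetime
```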
### Thread Safety

All current device memory resources are thread safe unless documented otherwise. More specifically, calls to memory resource `allocate()` and `deallocate()` methods are safe with respect to calls to either of these functions from other threads. They are not thread safe with respect to construction and destruction of the memory resource object.
Note that a class `thread_safe_resource_adaptor` is provided which can be used to adapt a memory resource that is not thread safe to be thread safe (as described above). This adapter is not needed with any current RMM device memory resources.
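For instance, wrapping a hypothetical non-thread-safe resource (`my_unsafe_resource` below is a placeholder) might look like:

```c++
#include <rmm/mr/device/thread_safe_resource_adaptor.hpp>

my_unsafe_resource unsafe_mr;  // hypothetical resource that is not thread safe
// All allocate()/deallocate() calls through `safe_mr` are serialized internally
rmm::mr::thread_safe_resource_adaptor<my_unsafe_resource> safe_mr{&unsafe_mr};
```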
### Stream-ordered Memory Allocation

`rmm::mr::device_memory_resource` is a base class that provides stream-ordered memory allocation. This allows optimizations such as re-using memory deallocated on the same stream without the overhead of synchronization.
A call to `device_memory_resource::allocate(bytes, stream_a)` returns a pointer that is valid to use on `stream_a`. Using the memory on a different stream (say `stream_b`) is Undefined Behavior unless the two streams are first synchronized, for example by using `cudaStreamSynchronize(stream_a)` or by recording a CUDA event on `stream_a` and then calling `cudaStreamWaitEvent(stream_b, event)`.
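A sketch of the event-based approach (`mr`, `produce`, and `consume` are placeholders; error checking is omitted):

```c++
// `mr` is any rmm::mr::device_memory_resource
void* p = mr.allocate(100, stream_a);
produce<<<grid, block, 0, stream_a.value()>>>(p);  // use the memory on stream_a

cudaEvent_t event;
cudaEventCreate(&event);
cudaEventRecord(event, stream_a.value());
cudaStreamWaitEvent(stream_b.value(), event, 0);   // stream_b now waits for stream_a

consume<<<grid, block, 0, stream_b.value()>>>(p);  // safe: ordered after the work on stream_a
```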
The stream specified to `device_memory_resource::deallocate` should be a stream on which it is valid to use the deallocated memory immediately for another allocation. Typically this is the stream on which the allocation was last used before the call to `deallocate`. The passed stream may be used internally by a `device_memory_resource` for managing available memory with minimal synchronization, and it may also be synchronized at a later time, for example using a call to `cudaStreamSynchronize()`.
For this reason, it is Undefined Behavior to destroy a CUDA stream that is passed to `device_memory_resource::deallocate`. If the stream on which the allocation was last used has been destroyed before calling `deallocate`, or it is known that it will be destroyed, it is likely better to synchronize the stream (before destroying it) and then pass a different stream to `deallocate` (e.g. the default stream).
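A sketch of that recommendation (assuming `p` was allocated from `mr` and last used on `stream`):

```c++
cudaStreamSynchronize(stream.value());           // ensure all work using `p` has completed
// ... `stream` may now safely be destroyed ...
mr.deallocate(p, 100, rmm::cuda_stream_view{});  // deallocate on a different (default) stream
```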
Note that device memory data structures such as `rmm::device_buffer` and `rmm::device_uvector` follow these stream-ordered memory allocation semantics and rules.
### Available Resources

RMM provides several `device_memory_resource` derived classes to satisfy various user requirements. For more detailed information about these resources, see their respective documentation.
#### `cuda_memory_resource`

Allocates and frees device memory using `cudaMalloc` and `cudaFree`.
#### `managed_memory_resource`

Allocates and frees device memory using `cudaMallocManaged` and `cudaFree`.
Note that `managed_memory_resource` cannot be used with NVIDIA Virtual GPU Software (vGPU, for use with virtual machines or hypervisors) because NVIDIA CUDA Unified Memory is not supported by NVIDIA vGPU.
#### `pool_memory_resource`

A coalescing, best-fit pool sub-allocator.
#### `fixed_size_memory_resource`

A memory resource that can only allocate a single fixed size. Average allocation and deallocation cost is constant.
#### `binning_memory_resource`

Configurable to use multiple upstream memory resources for allocations that fall within different bin sizes. Often configured with multiple bins backed by `fixed_size_memory_resource`s and a single `pool_memory_resource` for allocations larger than the largest bin size.
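As a sketch of such a configuration (the exponent range 18..22, i.e. power-of-two bins from 256 KiB to 4 MiB, is an arbitrary choice):

```c++
#include <rmm/mr/device/binning_memory_resource.hpp>
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>

rmm::mr::cuda_memory_resource cuda_mr;
rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr};
// Fixed-size bins of 2^18..2^22 bytes; larger allocations fall through to `pool_mr`
rmm::mr::binning_memory_resource<decltype(pool_mr)> bin_mr{&pool_mr, 18, 22};
```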
### Default Resources and Per-device Resources

RMM users commonly need to configure a `device_memory_resource` object to use for all allocations where another resource has not explicitly been provided. A common example is configuring a `pool_memory_resource` to use for all allocations to get fast dynamic allocation.

To enable this use case, RMM provides the concept of a "default" `device_memory_resource`. This resource is used when another is not explicitly provided.
Accessing and modifying the default resource is done through two functions:

- `device_memory_resource* get_current_device_resource()`
  - Returns a pointer to the default resource for the current CUDA device.
  - The initial default memory resource is an instance of `cuda_memory_resource`.
  - This function is thread safe with respect to concurrent calls to it and `set_current_device_resource()`.
  - For more explicit control, you can use `get_per_device_resource()`, which takes a device ID.
- `device_memory_resource* set_current_device_resource(device_memory_resource* new_mr)`
  - Updates the default memory resource pointer for the current CUDA device to `new_mr` and returns the previous default resource pointer.
  - If `new_mr` is `nullptr`, then resets the default resource to `cuda_memory_resource`.
  - This function is thread safe with respect to concurrent calls to it and `get_current_device_resource()`.
  - For more explicit control, you can use `set_per_device_resource()`, which takes a device ID.
For example:

```c++
rmm::mr::cuda_memory_resource cuda_mr;
// Construct a resource that uses a coalescing best-fit pool allocator
rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr};
rmm::mr::set_current_device_resource(&pool_mr); // Updates the current device resource pointer to `pool_mr`
rmm::mr::device_memory_resource* mr = rmm::mr::get_current_device_resource(); // Points to `pool_mr`
```
### Multiple Devices

A `device_memory_resource` should only be used when the active CUDA device is the same device that was active when the `device_memory_resource` was created. Otherwise behavior is undefined.

If a `device_memory_resource` is used with a stream associated with a different CUDA device than the device for which the memory resource was created, behavior is undefined.
Creating a `device_memory_resource` for each device requires care to set the current device before creating each resource, and to maintain the lifetime of the resources as long as they are set as per-device resources. Here is an example loop that creates `unique_ptr`s to `pool_memory_resource` objects for each device and sets them as the per-device resource for that device.
```c++
rmm::mr::cuda_memory_resource cuda_mr;  // upstream resource; must outlive the pools
using pool_mr = rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>;
std::vector<std::unique_ptr<pool_mr>> per_device_pools;
for (int i = 0; i < N; ++i) {
  cudaSetDevice(i);  // set device i before creating MR
  // Use a vector of unique_ptr to maintain the lifetime of the MRs
  per_device_pools.push_back(std::make_unique<pool_mr>(&cuda_mr));
  // Set the per-device resource for device i
  rmm::mr::set_per_device_resource(rmm::cuda_device_id{i}, per_device_pools.back().get());
}
```
### Allocators

C++ interfaces commonly allow customizable memory allocation through an `Allocator` object. RMM provides several `Allocator` and `Allocator`-like classes.
#### `polymorphic_allocator`

A stream-ordered allocator similar to `std::pmr::polymorphic_allocator`. Unlike the standard C++ `Allocator` interface, the `allocate` and `deallocate` functions take a `cuda_stream_view` indicating the stream on which the (de)allocation occurs.
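A short usage sketch:

```c++
#include <rmm/cuda_stream.hpp>
#include <rmm/mr/device/polymorphic_allocator.hpp>

rmm::cuda_stream stream;
rmm::mr::polymorphic_allocator<int> alloc;  // uses the current device resource by default
int* p = alloc.allocate(100, stream);       // storage for 100 ints, allocated on `stream`
// ... use `p` on `stream` ...
alloc.deallocate(p, 100, stream);
```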
#### `stream_allocator_adaptor`

`stream_allocator_adaptor` can be used to adapt a stream-ordered allocator to present a standard `Allocator` interface to consumers that may not be designed to work with a stream-ordered interface.
Example:

```c++
rmm::cuda_stream stream;
rmm::mr::polymorphic_allocator<int> stream_alloc;

// Constructs an adaptor that forwards all (de)allocations to `stream_alloc` on `stream`
auto adapted = rmm::mr::make_stream_allocator_adaptor(stream_alloc, stream);

// Allocates 100 bytes using `stream_alloc` on `stream`
auto p = adapted.allocate(100);
...
// Deallocates using `stream_alloc` on `stream`
adapted.deallocate(p, 100);
```
#### `thrust_allocator`

`thrust_allocator` is a device memory allocator that uses the strongly typed `thrust::device_ptr`, making it usable with containers like `thrust::device_vector`.
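For example, a sketch of using it as the allocator for a `thrust::device_vector` (assuming a default-constructed allocator, which uses the default stream and the current device resource):

```c++
#include <rmm/mr/device/thrust_allocator_adaptor.hpp>

#include <thrust/device_vector.h>

// Storage for the vector's 100 elements comes from the current RMM device resource
thrust::device_vector<int, rmm::mr::thrust_allocator<int>> vec(100);
```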
See below for more information on using RMM with Thrust.
### Device Data Structures

#### `device_buffer`

An untyped, uninitialized RAII class for stream-ordered device memory allocation.
```c++
cuda_stream_view s{...};
// Allocates at least 100 bytes on stream `s` using the *default* resource
rmm::device_buffer b{100, s};
void* p = b.data(); // Raw, untyped pointer to underlying device memory

kernel<<<..., s.value()>>>(b.data()); // `b` is only safe to use on `s`

rmm::mr::device_memory_resource* mr = new my_custom_resource{...};
// Allocates at least 100 bytes on stream `s` using the resource `mr`
rmm::device_buffer b2{100, s, mr};
```
#### `device_uvector<T>`

A typed, uninitialized RAII class for allocation of a contiguous set of elements in device memory. Similar to a `thrust::device_vector`, but as an optimization, does not default initialize the contained elements. This optimization restricts the types `T` to trivially copyable types.
```c++
cuda_stream_view s{...};
// Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the
// default resource
rmm::device_uvector<int32_t> v(100, s);
// Initializes the elements to 0
thrust::uninitialized_fill(thrust::cuda::par.on(s.value()), v.begin(), v.end(), int32_t{0});

rmm::mr::device_memory_resource* mr = new my_custom_resource{...};
// Allocates uninitialized storage for 100 `int32_t` elements on stream `s` using the resource `mr`
rmm::device_uvector<int32_t> v2{100, s, mr};
```
#### `device_scalar`

A typed, RAII class for allocation of a single element in device memory. This is similar to a `device_uvector` with a single element, but provides convenience functions like modifying the value in device memory from the host, or retrieving the value from device to host.
```c++
cuda_stream_view s{...};
// Allocates uninitialized storage for a single `int32_t` in device memory
rmm::device_scalar<int32_t> a{s};
a.set_value(42, s); // Updates the value in device memory to `42` on stream `s`

kernel<<<...>>>(a.data()); // Pass raw pointer to underlying element in device memory

int32_t v = a.value(s); // Retrieves the value from device to host on stream `s`
```
### `host_memory_resource`

`rmm::mr::host_memory_resource` is the base class that defines the interface for allocating and freeing host memory.
Similar to `device_memory_resource`, it has two key functions for (de)allocation:

1. `void* host_memory_resource::allocate(std::size_t bytes, std::size_t alignment)`
   - Returns a pointer to an allocation of at least `bytes` bytes aligned to the specified `alignment`.
2. `void host_memory_resource::deallocate(void* p, std::size_t bytes, std::size_t alignment)`
   - Reclaims a previous allocation of size `bytes` pointed to by `p`.
Unlike `device_memory_resource`, the `host_memory_resource` interface and behavior is identical to `std::pmr::memory_resource`.
#### `new_delete_resource`

Uses the global `operator new` and `operator delete` to allocate host memory.
#### `pinned_memory_resource`

Allocates "pinned" host memory using `cudaMallocHost`/`cudaFreeHost`.
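A brief sketch of using a host resource directly (the 256-byte alignment is an arbitrary choice):

```c++
#include <rmm/mr/host/pinned_memory_resource.hpp>

rmm::mr::pinned_memory_resource pinned_mr;
void* p = pinned_mr.allocate(1024, 256);  // 1024 bytes of page-locked host memory
// ... e.g. use `p` as the source/destination of cudaMemcpyAsync ...
pinned_mr.deallocate(p, 1024, 256);
```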
### Host Data Structures

RMM does not currently provide any data structures that interface with `host_memory_resource`. In the future, RMM will provide a similar host-side structure like `device_buffer` and an allocator that can be used with STL containers.
## Using RMM with Thrust

RAPIDS and other CUDA libraries make heavy use of Thrust. Thrust uses CUDA device memory in two situations:

1. As the backing store for `thrust::device_vector`, and
2. As temporary storage inside some algorithms, such as `thrust::sort`.
RMM provides `rmm::mr::thrust_allocator` as a conforming Thrust allocator that uses `device_memory_resource`s.
To instruct a Thrust algorithm to use `rmm::mr::thrust_allocator` to allocate temporary storage, you can use the custom Thrust CUDA device execution policy: `rmm::exec_policy(stream)`.
```c++
thrust::sort(rmm::exec_policy(stream)->on(stream), ...);
```
The first `stream` argument is the `stream` to use for `rmm::mr::thrust_allocator`. The second `stream` argument is what should be used to execute the Thrust algorithm. These two arguments must be identical.
## Logging

RMM includes two forms of logging: memory event logging and debug logging.
### Memory Event Logging and `logging_resource_adaptor`

Memory event logging writes details of every allocation or deallocation to a CSV (comma-separated value) file. In C++, memory event logging is enabled by using the `logging_resource_adaptor` as a wrapper around any other `device_memory_resource` object.
Each row in the log represents either an allocation or a deallocation. The columns of the file are "Thread, Time, Action, Pointer, Size, Stream".
The CSV output files of the `logging_resource_adaptor` can be used as input to `REPLAY_BENCHMARK`, which is available when building RMM from source, in the `gbenchmarks` folder in the build directory. This log replayer can be useful for profiling and debugging allocator issues.
The following C++ example creates a logging version of a `cuda_memory_resource` that outputs the log to the file "logs/test1.csv".
```c++
std::string filename{"logs/test1.csv"};
rmm::mr::cuda_memory_resource upstream;
rmm::mr::logging_resource_adaptor<rmm::mr::cuda_memory_resource> log_mr{&upstream, filename};
```
If a file name is not specified, the environment variable `RMM_LOG_FILE` is queried for the file name. If `RMM_LOG_FILE` is not set, then an exception is thrown by the `logging_resource_adaptor` constructor.
In Python, memory event logging is enabled when the `logging` parameter of `rmm.reinitialize()` is set to `True`. The log file name can be set using the `log_file_name` parameter. See `help(rmm.reinitialize)` for full details.
### Debug Logging

RMM includes a debug logger which can be enabled to log trace and debug information to a file. This information can show when errors occur, when additional memory is allocated from upstream resources, etc. The default log file is `rmm_log.txt` in the current working directory, but the environment variable `RMM_DEBUG_LOG_FILE` can be set to specify the path and file name.
There is a CMake configuration variable `RMM_LOGGING_LEVEL`, which can be set to enable compilation of more detailed logging. The default is `INFO`. Available levels are `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, `CRITICAL` and `OFF`.
The log relies on the spdlog library.
Note that to see logging below the `INFO` level, the C++ application must also call `rmm::logger().set_level()`, e.g. to enable all levels of logging down to `TRACE`, call `rmm::logger().set_level(spdlog::level::trace)` (and compile with `-DRMM_LOGGING_LEVEL=TRACE`).
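For example (a sketch; `rmm/logger.hpp` is assumed to be the header declaring `rmm::logger()`):

```c++
#include <rmm/logger.hpp>

// Enable run-time logging of all levels down to TRACE.
// The application must also be compiled with -DRMM_LOGGING_LEVEL=TRACE
// for these messages to be compiled in at all.
rmm::logger().set_level(spdlog::level::trace);
```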
Note that debug logging is different from the CSV memory allocation logging provided by `rmm::mr::logging_resource_adaptor`. The latter is for logging a history of allocation/deallocation actions which can be useful for replay with RMM's replay benchmark.
## Using RMM in Python Code

There are two ways to use RMM in Python code:

1. Using the `rmm.DeviceBuffer` API to explicitly create and manage device memory allocations
2. Transparently via external libraries such as CuPy and Numba (covered below)
RMM provides a `MemoryResource` abstraction to control how device memory is allocated in both the above uses.
### DeviceBuffers

A DeviceBuffer represents an untyped, uninitialized device memory allocation. DeviceBuffers can be created by providing the size of the allocation in bytes:
```python
>>> import rmm
>>> buf = rmm.DeviceBuffer(size=100)
```
The size of the allocation and the memory address associated with it can be accessed via the `.size` and `.ptr` attributes respectively:
```python
>>> buf.size
100
>>> buf.ptr
140202544726016
```
DeviceBuffers can also be created by copying data from host memory:
```python
>>> import rmm
>>> import numpy as np

>>> a = np.array([1, 2, 3], dtype='float64')
>>> buf = rmm.to_device(a.tobytes())
>>> buf.size
24
```
Conversely, the data underlying a DeviceBuffer can be copied to the host:
```python
>>> np.frombuffer(buf.tobytes())
array([1., 2., 3.])
```
### MemoryResource objects

`MemoryResource` objects are used to configure how device memory allocations are made by RMM.

By default, if a `MemoryResource` is not set explicitly, RMM uses the `CudaMemoryResource`, which uses `cudaMalloc` for allocating device memory.
`rmm.reinitialize()` provides an easy way to initialize RMM with specific memory resource options across multiple devices. See `help(rmm.reinitialize)` for full details.
For lower-level control, the `rmm.mr.set_current_device_resource()` function can be used to set a different `MemoryResource` for the current CUDA device. For example, enabling the `ManagedMemoryResource` tells RMM to use `cudaMallocManaged` instead of `cudaMalloc` for allocating memory:
```python
>>> import rmm
>>> rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource())
```
:warning: The default resource must be set for any device before allocating any device memory on that device. Setting or changing the resource after device allocations have been made can lead to unexpected behaviour or crashes. See Multiple Devices above.
As another example, `PoolMemoryResource` allows you to allocate a large "pool" of device memory up-front. Subsequent allocations will draw from this pool of already allocated memory. The example below shows how to construct a `PoolMemoryResource` with an initial size of 1 GiB and a maximum size of 4 GiB. The pool uses `CudaMemoryResource` as its underlying ("upstream") memory resource:
```python
>>> import rmm
>>> pool = rmm.mr.PoolMemoryResource(
...     upstream=rmm.mr.CudaMemoryResource(),
...     initial_pool_size=2**30,
...     maximum_pool_size=2**32
... )
>>> rmm.mr.set_current_device_resource(pool)
```
Other MemoryResources include:

- `FixedSizeMemoryResource` for allocating fixed blocks of memory
- `BinningMemoryResource` for allocating blocks within specified "bin" sizes from different memory resources
MemoryResources are highly configurable and can be composed together in different ways. See `help(rmm.mr)` for more information.
### Using RMM with CuPy

You can configure CuPy to use RMM for memory allocations by setting the CuPy CUDA allocator to `rmm_cupy_allocator`:
```python
>>> import rmm
>>> import cupy

>>> cupy.cuda.set_allocator(rmm.rmm_cupy_allocator)
```
### Using RMM with Numba

You can configure Numba to use RMM for memory allocations using the Numba EMM Plugin.

This can be done in two ways:

1. Setting the environment variable `NUMBA_CUDA_MEMORY_MANAGER`:
```bash
$ NUMBA_CUDA_MEMORY_MANAGER=rmm python (args)
```
2. Using the `set_memory_manager()` function provided by Numba:
```python
>>> from numba import cuda
>>> import rmm

>>> cuda.set_memory_manager(rmm.RMMNumbaManager)
```