SpConv: PyTorch Spatially Sparse Convolution Library

This is a spatially sparse convolution library like SparseConvNet, but faster and easier to read. It provides sparse convolution/transposed convolution, submanifold convolution, inverse convolution, and sparse max pooling.

2020-5-2: added ConcatTable, JoinTable, AddTable, and Identity modules for building ResNet and U-Net in this version of spconv.


Docker image (contains Python 3.8, CUDA 10.1, the fish shell, and the newest PyTorch and TensorFlow):

    docker pull scrin/dev-spconv

Install on Ubuntu 16.04/18.04

  • If you are using PyTorch 1.4+ and encounter "nvcc fatal: unknown option '-Wall'", go to the torch package directory and remove the flags containing "-Wall" from INTERFACE_COMPILE_OPTIONS in Caffe2Targets.cmake. This problem can't be fixed in this project (avoiding it would require removing all torch dependencies from the CUDA sources and dropping half-precision support).
  1. Clone this repo with its submodules:

    git clone xxx.git --recursive
  2. Install the boost headers to your system include path. You can use either

    sudo apt-get install libboost-all-dev

    or download the compressed files from the official boost website and copy the headers to your include path.
  3. Download cmake >= 3.13.2, then add cmake executables to PATH.

  4. Ensure you have PyTorch 1.0+ installed in your environment, then run

    python setup.py bdist_wheel

    (don't use "python setup.py install").
  5. Run

    cd ./dist

    and use pip to install the generated wheel file.

Install on Windows 10 (Not supported for now)

Compare with SparseConvNet


  • SparseConvNet's sparse convolution doesn't support padding and dilation; spconv does.

  • spconv only contains sparse convolutions; batch norm and activations can be taken directly from torch.nn, while SparseConvNet ships many of its own layer implementations, such as batch norm and activations.

  • spconv is faster than SparseConvNet thanks to GPU indice generation and a gather-GEMM-scatter algorithm; SparseConvNet uses a hand-written GEMM, which is slow.



SparseConvTensor

features = # your features with shape [N, numPlanes]
indices = # your indices/coordinates with shape [N, ndim + 1]; the batch index must be put in indices[:, 0]
spatial_shape = # spatial shape of your sparse tensor; spatial_shape[i] is the shape of indices[:, 1 + i]
batch_size = # batch size of your sparse tensor
x = spconv.SparseConvTensor(features, indices, spatial_shape, batch_size)
x_dense_NCHW = x.dense() # convert the sparse tensor to a dense NCHW tensor
print(x.sparity) # helper attribute to check sparsity
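As a mental model for `x.dense()`: each feature row is written into a dense [batch, channels, *spatial_shape] array at its coordinates, and every other position stays zero. A minimal NumPy sketch of that conversion (illustrative only; this is not spconv's implementation):

```python
import numpy as np

def to_dense_nchw(features, indices, spatial_shape, batch_size):
    """Scatter sparse features into a dense NCHW-style array.

    features: [N, C] array; indices: [N, 1 + ndim] array with the
    batch index in column 0 and spatial coordinates in the rest.
    """
    num_channels = features.shape[1]
    dense = np.zeros((batch_size, num_channels, *spatial_shape), dtype=features.dtype)
    for feat, idx in zip(features, indices):
        batch, coords = idx[0], tuple(idx[1:])
        dense[(batch, slice(None)) + coords] = feat  # write all channels at once
    return dense

features = np.array([[1.0, 2.0], [3.0, 4.0]])  # N = 2 active sites, C = 2
indices = np.array([[0, 1, 1], [0, 0, 2]])     # batch 0, 2-D coordinates
dense = to_dense_nchw(features, indices, [3, 3], batch_size=1)  # shape (1, 2, 3, 3)
```

The real `x.dense()` does this on the GPU in one scatter; the loop above is only for clarity.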

Sparse Convolution

import spconv
from torch import nn

class ExampleNet(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.net = spconv.SparseSequential(
            spconv.SparseConv3d(32, 64, 3), # just like nn.Conv3d, but groups and all([d > 1, s > 1]) are not supported
            nn.BatchNorm1d(64), # non-spatial layers can be used directly in SparseSequential
            spconv.SubMConv3d(64, 64, 3, indice_key="subm0"),
            # submanifold convolutions with the same indice_key share indices to save indice generation time
            spconv.SubMConv3d(64, 64, 3, indice_key="subm0"),
            spconv.SparseConvTranspose3d(64, 64, 3, 2),
            spconv.ToDense(), # convert the spconv tensor to a dense NCHW tensor
            nn.Conv3d(64, 64, 3),
        )
        self.shape = shape

    def forward(self, features, coors, batch_size):
        coors = coors.int() # unlike torch, this library only accepts int coordinates
        x = spconv.SparseConvTensor(features, coors, self.shape, batch_size)
        return self.net(x)

Inverse Convolution

Inverse sparse convolution is the "inverse" of a sparse convolution: its output contains the same indices as the input of the corresponding sparse convolution.

Inverse convolution is usually used in semantic segmentation.

class ExampleNet(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.net = spconv.SparseSequential(
            spconv.SparseConv3d(32, 64, 3, 2, indice_key="cp0"),
            spconv.SparseInverseConv3d(64, 32, 3, indice_key="cp0"), # the kernel size must be provided to create the weight
        )
        self.shape = shape

    def forward(self, features, coors, batch_size):
        coors = coors.int()
        x = spconv.SparseConvTensor(features, coors, self.shape, batch_size)
        return self.net(x)

Utility functions

  • Convert a point cloud to voxels:

voxel_generator = spconv.utils.VoxelGenerator(
    voxel_size=[0.1, 0.1, 0.1],
    point_cloud_range=[-50, -50, -3, 50, 50, 1],
    max_num_points=30,
    max_voxels=40000)

points = # [N, 3+] tensor
voxels, coords, num_points_per_voxel = voxel_generator.generate(points)
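Conceptually, the generator quantizes each point's xyz by the voxel size, groups points that land in the same cell, and caps the number of points kept per voxel. A simplified pure-NumPy sketch of that grouping (illustrative only; the names are made up and spconv's real voxel generator is a compiled implementation):

```python
import numpy as np

def voxelize(points, voxel_size, pc_range_min, max_num_points=30):
    # quantize xyz into integer voxel coordinates
    coords = np.floor((points[:, :3] - pc_range_min) / voxel_size).astype(np.int64)
    voxels, voxel_coords, counts, index = [], [], [], {}
    for point, c in zip(points, map(tuple, coords)):
        if c not in index:  # first point seen in a new voxel
            index[c] = len(voxels)
            voxels.append(np.zeros((max_num_points, points.shape[1]), points.dtype))
            voxel_coords.append(c)
            counts.append(0)
        i = index[c]
        if counts[i] < max_num_points:  # drop points beyond the per-voxel cap
            voxels[i][counts[i]] = point
            counts[i] += 1
    return np.stack(voxels), np.array(voxel_coords), np.array(counts)

pts = np.array([[0.05, 0.05, 0.05],   # voxel (0, 0, 0)
                [0.06, 0.04, 0.07],   # voxel (0, 0, 0)
                [0.15, 0.05, 0.05]])  # voxel (1, 0, 0)
voxels, coords, num_points = voxelize(pts, np.array([0.1, 0.1, 0.1]), np.array([0.0, 0.0, 0.0]))
```

The returned triple mirrors the `(voxels, coords, num_points_per_voxel)` shape of `VoxelGenerator.generate`.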

Implementation Details

This implementation uses a gather-GEMM-scatter framework to perform sparse convolution.
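The idea can be sketched in NumPy: for each kernel offset, gather the input rows that participate, multiply them by that offset's weight matrix (the GEMM), and scatter-add the results into the output rows. The pair lists below stand in for spconv's GPU-generated indice pairs; everything here is an illustrative sketch, not the library's code:

```python
import numpy as np

def gather_gemm_scatter(in_feats, weights, pairs, num_out):
    # in_feats: [N_in, C_in]; weights: [K, C_in, C_out]
    # pairs[k]: (input_row, output_row) pairs active for kernel offset k
    out = np.zeros((num_out, weights.shape[2]), dtype=in_feats.dtype)
    for k, kp in enumerate(pairs):
        if not kp:
            continue  # this kernel offset touches no active sites
        in_idx, out_idx = (np.array(v) for v in zip(*kp))
        gathered = in_feats[in_idx]           # gather
        transformed = gathered @ weights[k]   # GEMM
        np.add.at(out, out_idx, transformed)  # scatter-add (handles repeated output rows)
    return out

in_feats = np.array([[1.0, 0.0], [0.0, 2.0]])  # two active input sites, C_in = 2
weights = np.stack([np.eye(2), np.eye(2)])     # K = 2 identity kernels, C_out = 2
pairs = [[(0, 0)], [(1, 0)]]                   # both inputs contribute to output row 0
out = gather_gemm_scatter(in_feats, weights, pairs, num_out=1)
```

Each per-offset GEMM is a dense matrix multiply, which is why this formulation maps well onto GPU BLAS routines.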

Authors


  • Yan Yan - initial work - traveller59

  • Bo Li - GPU indice generation idea; owner of the patent on the sparse convolution GPU indice generation algorithm (not including subm) - prclibo

Third party libraries

  • CUDPP: a CUDA library that contains a CUDA hash implementation.

  • robin-map: a fast C++ hash library, almost 2x faster than std::unordered_map in this project.

  • pybind11: a header-only Python/C++ binding library.

  • prettyprint: a header-only library for printing containers.


License

This project is licensed under the Apache License 2.0 - see the license file for details.

The CUDPP hash code is licensed under BSD License.

The robin-map code is licensed under MIT license.
