Spatial Sparse Convolution in PyTorch
This is a spatially sparse convolution library, like SparseConvNet but faster and easier to read. The library provides sparse convolution/transposed convolution, submanifold convolution, inverse convolution, and sparse max pooling.
2020-5-2: we added ConcatTable, JoinTable, AddTable, and Identity modules for building ResNet and U-Net architectures in this version of spconv; a residual block built on the same pattern is sketched below.
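To illustrate the pattern these table modules package up (fan out to branches, then merge), here is a hedged sketch of a sparse residual block written with plain module composition; the class name `SparseBasicBlock` is illustrative, not part of the library:

```python
import spconv
from torch import nn

class SparseBasicBlock(nn.Module):
    """Illustrative residual block; ConcatTable/AddTable wrap this same
    fan-out-then-sum pattern as reusable modules."""
    def __init__(self, channels, indice_key):
        super().__init__()
        self.branch = spconv.SparseSequential(
            spconv.SubMConv3d(channels, channels, 3, indice_key=indice_key),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            spconv.SubMConv3d(channels, channels, 3, indice_key=indice_key),
            nn.BatchNorm1d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.branch(x)
        # submanifold convolutions preserve the input indices, so the two
        # feature matrices are row-aligned and can simply be added
        out.features = self.relu(out.features + x.features)
        return out
```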
Run `docker pull scrin/dev-spconv` to get a development image containing Python 3.8, CUDA 10.1, the fish shell, and the latest PyTorch and TensorFlow.
Use `git clone xxx.git --recursive` to clone this repo.
Install the boost headers to your system include path. You can either run `sudo apt-get install libboost-all-dev` or download the compressed files from the official boost website and copy the headers to your include path.
Download CMake >= 3.13.2, then add the cmake executables to your PATH.
Ensure you have PyTorch 1.0+ installed in your environment, then run `python setup.py bdist_wheel` (don't use `python setup.py install`). Finally, `cd ./dist` and use pip to install the generated wheel file.
SparseConvNet's sparse convolution doesn't support padding or dilation; spconv does.
spconv only contains sparse convolutions; batch norm and activation layers can be used directly from torch.nn. SparseConvNet ships many of its own layer implementations, such as batch norm and activations.
```python
features = # your features with shape [N, numPlanes]
indices = # your indices/coordinates with shape [N, ndim + 1]; the batch index must be in indices[:, 0]
spatial_shape = # spatial shape of your sparse tensor; spatial_shape[i] is the shape of indices[:, 1 + i]
batch_size = # batch size of your sparse tensor
x = spconv.SparseConvTensor(features, indices, spatial_shape, batch_size)
x_dense_NCHW = x.dense()  # convert the sparse tensor to a dense NCHW-layout tensor
print(x.sparity)  # helper property to check sparsity
```
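For concreteness, a minimal sketch that fills in the placeholders above with made-up sizes (100 active sites, 32 channels, a [10, 40, 40] grid, batch size 2; all values are illustrative assumptions):

```python
import torch
import spconv

batch_size = 2
spatial_shape = [10, 40, 40]
num_active = 100

features = torch.rand(num_active, 32)
# indices are [N, 4]: (batch, z, y, x); random duplicates are possible here
# but harmless for this illustration
indices = torch.cat(
    [torch.randint(0, batch_size, (num_active, 1))]
    + [torch.randint(0, s, (num_active, 1)) for s in spatial_shape],
    dim=1,
).int()

x = spconv.SparseConvTensor(features, indices, spatial_shape, batch_size)
print(x.dense().shape)  # torch.Size([2, 32, 10, 40, 40])
```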
```python
import spconv
from torch import nn

class ExampleNet(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.net = spconv.SparseSequential(
            spconv.SparseConv3d(32, 64, 3),  # just like nn.Conv3d, but groups aren't supported, and dilation and stride can't both be > 1
            nn.BatchNorm1d(64),  # non-spatial layers can be used directly in SparseSequential
            nn.ReLU(),
            spconv.SubMConv3d(64, 64, 3, indice_key="subm0"),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            # when using submanifold convolutions, their indices can be shared
            # (via indice_key) to save indice generation time
            spconv.SubMConv3d(64, 64, 3, indice_key="subm0"),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            spconv.SparseConvTranspose3d(64, 64, 3, 2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            spconv.ToDense(),  # convert the spconv tensor to a dense NCDHW tensor
            nn.Conv3d(64, 64, 3),
            nn.BatchNorm3d(64),  # after ToDense the tensor is 5-D, so use BatchNorm3d
            nn.ReLU(),
        )
        self.shape = shape

    def forward(self, features, coors, batch_size):
        coors = coors.int()  # unlike torch, this library only accepts int coordinates
        x = spconv.SparseConvTensor(features, coors, self.shape, batch_size)
        return self.net(x)  # .dense()
```
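A hypothetical driver for the module above (shapes and data are made up for illustration; spconv's convolution kernels run on CUDA):

```python
import torch

shape = [10, 40, 40]  # spatial shape (z, y, x) of the sparse tensor
net = ExampleNet(shape).cuda()

num_active, batch_size = 100, 2
features = torch.rand(num_active, 32).cuda()
coors = torch.cat(
    [torch.randint(0, batch_size, (num_active, 1))]
    + [torch.randint(0, s, (num_active, 1)) for s in shape],
    dim=1,
).cuda()  # forward() casts these to int32

out = net(features, coors, batch_size)  # a dense NCDHW tensor
```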
Inverse sparse convolution is the "inverse" of a sparse convolution: its output contains the same indices as the input of the corresponding sparse convolution. Inverse convolution is usually used in semantic segmentation.
```python
import spconv
from torch import nn

class ExampleNet(nn.Module):
    def __init__(self, shape):
        super().__init__()
        self.net = spconv.SparseSequential(
            spconv.SparseConv3d(32, 64, 3, 2, indice_key="cp0"),
            # the kernel size must be provided to create the weight;
            # output indices are restored from the "cp0" indice_key
            spconv.SparseInverseConv3d(64, 32, 3, indice_key="cp0"),
        )
        self.shape = shape

    def forward(self, features, coors, batch_size):
        coors = coors.int()
        x = spconv.SparseConvTensor(features, coors, self.shape, batch_size)
        return self.net(x)
```
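Because the output indices are restored from the "cp0" key, the network's output has the same active sites as its input. A hypothetical sanity check (variable names `net`, `features`, `coors`, `shape` as in the earlier driver sketch):

```python
x_in = spconv.SparseConvTensor(features, coors.int(), shape, batch_size)
x_out = net(features, coors, batch_size)
# the inverse conv restores the pre-downsampling indices recorded under "cp0",
# so the output has the same number of active sites as the input
assert x_out.indices.shape == x_in.indices.shape
```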
```python
voxel_generator = spconv.utils.VoxelGenerator(
    voxel_size=[0.1, 0.1, 0.1],
    point_cloud_range=[-50, -50, -3, 50, 50, 1],
    max_num_points=30,
    max_voxels=40000,
)

points = # your point cloud with shape [N, 3+]
voxels, coords, num_points_per_voxel = voxel_generator.generate(points)
```
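To connect voxelization with the sparse tensor API, a hedged end-to-end sketch: the random point cloud and the mean-pooled per-voxel feature are illustrative choices, and it is assumed here that `generate()` returns numpy arrays with `coords` in (z, y, x) order:

```python
import numpy as np
import torch
import spconv

# an illustrative random point cloud: [N, 4] columns (x, y, z, intensity)
points = np.random.uniform(-50, 50, size=(20000, 4)).astype(np.float32)
points[:, 2] = np.random.uniform(-3, 1, size=20000)  # keep z inside the range
points[:, 3] = np.random.uniform(0, 1, size=20000)   # intensity

voxels, coords, num_points_per_voxel = voxel_generator.generate(points)
# voxels: [num_voxels, 30, 4], coords: [num_voxels, 3], num_points_per_voxel: [num_voxels]

# a simple per-voxel feature: the mean of the points inside each voxel
features = voxels.sum(axis=1) / num_points_per_voxel.reshape(-1, 1)

# prepend the batch index (all zeros for a single sample) to get [N, ndim + 1] indices
batch_idx = np.zeros((coords.shape[0], 1), dtype=coords.dtype)
indices = torch.from_numpy(np.concatenate([batch_idx, coords], axis=1)).int()
features = torch.from_numpy(features.astype(np.float32))

# spatial_shape = point_cloud_range extent / voxel_size = [40, 1000, 1000] in (z, y, x)
x = spconv.SparseConvTensor(features, indices, [40, 1000, 1000], batch_size=1)
```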
This implementation uses a gather-GEMM-scatter framework to perform sparse convolution.
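Conceptually, each kernel offset gathers the contributing input features, multiplies them by that offset's weight slice, and scatter-adds the result into the output sites. Below is a plain-PyTorch sketch of the idea, not the library's actual CUDA implementation; the `in_inds`/`out_inds` pairs are assumed to come from the precomputed indice-generation step (the "rulebook"):

```python
import torch

def conv_step_gather_gemm_scatter(features, weight, in_inds, out_inds, num_out):
    """One kernel-offset step of gather-GEMM-scatter sparse convolution.

    features: [N_in, C_in]  features of the active input sites
    weight:   [C_in, C_out] weight slice for this kernel offset
    in_inds, out_inds: [M]  index pairs mapping contributing input sites
        to output sites for this offset
    """
    gathered = features[in_inds]          # gather:  [M, C_in]
    partial = gathered @ weight           # GEMM:    [M, C_out]
    out = features.new_zeros(num_out, weight.shape[1])
    out.index_add_(0, out_inds, partial)  # scatter-add into the output sites
    return out

# the full convolution accumulates this over all kernel offsets k:
# out = sum of conv_step_gather_gemm_scatter(f, w_k, in_k, out_k, n) over k
```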
Yan Yan - initial work - traveller59
Bo Li - GPU indice generation idea; holder of the patent on the sparse convolution GPU indice generation algorithm (not including subm) - prclibo
CUDPP: a CUDA library that contains a CUDA hash implementation.
robin-map: a fast C++ hash library; almost 2x faster than std::unordered_map in this project.
pybind11: a header-only Python/C++ binding library.
prettyprint: a header-only library for printing containers.
This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.
The CUDPP hash code is licensed under the BSD License.
The robin-map code is licensed under the MIT License.