PointASNL

This repository is for PointASNL, introduced in the following paper:

Xu Yan, Chaoda Zheng, Zhen Li*, Sheng Wang and Shuguang Cui, "PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling", CVPR 2020 [arxiv].

If you find our work useful in your research, please consider citing:

@inproceedings{yan2020pointasnl,
  title={Pointasnl: Robust point clouds processing using nonlocal neural networks with adaptive sampling},
  author={Yan, Xu and Zheng, Chaoda and Li, Zhen and Wang, Sheng and Cui, Shuguang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5589--5598},
  year={2020}
}

Getting Started

(1) Set up

Clone the repository:

git clone https://github.com/yanx27/PointASNL.git

Installation instructions for Ubuntu 16.04 (tested with CUDA 10):

  • Make sure CUDA and cuDNN are installed. Only this configuration has been tested:

    • Python 3.6.9, TensorFlow 1.13.1, CUDA 10.1
  • Follow the TensorFlow installation procedure.

  • Compile the customized TensorFlow operators:

    sh compile_op.sh

    N.B. If you installed TensorFlow in a virtual environment, it needs to be activated when running this script (a quick way to verify the build is sketched after this list).
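
If the build succeeds, the compiled shared libraries should load from Python without errors. The snippet below is a minimal sanity check; the .so paths are assumptions based on the PointNet++-style tf_ops layout and may differ in this repository.

```
import tensorflow as tf

# Assumed paths following the PointNet++-style tf_ops layout; adjust to your checkout.
for so in ['tf_ops/sampling/tf_sampling_so.so',
           'tf_ops/grouping/tf_grouping_so.so',
           'tf_ops/3d_interpolation/tf_interpolate_so.so']:
    tf.load_op_library(so)  # raises tf.errors.NotFoundError if the op was not built
    print('loaded', so)
```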

(2) ModelNet40 Classification

The aligned ModelNet40 dataset can be found here. Due to the randomness of data augmentation, the results of this code may differ slightly from those in the paper, but they should be around 93%.
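
The variation comes from the random augmentation applied during training. As a rough illustration of why runs differ (this is not the repository's exact augmentation code, and the parameter values are common defaults rather than values verified against the repo), a typical ModelNet40 pipeline perturbs each cloud like this:

```
import numpy as np

def augment(points, scale_low=0.8, scale_high=1.25, jitter_sigma=0.01, jitter_clip=0.05):
    """Randomly rotate about the up axis, scale, and jitter an (N, 3) point cloud.
    Illustrative sketch only; parameter values are common defaults, not this repo's."""
    theta = np.random.uniform(0, 2 * np.pi)
    rot = np.array([[np.cos(theta), 0, np.sin(theta)],
                    [0, 1, 0],
                    [-np.sin(theta), 0, np.cos(theta)]])
    points = points @ rot.T                                   # random rotation about y
    points *= np.random.uniform(scale_low, scale_high)        # random isotropic scaling
    points += np.clip(jitter_sigma * np.random.randn(*points.shape),
                      -jitter_clip, jitter_clip)              # clipped Gaussian jitter
    return points
```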

Data without Noise

The first epoch will take a relatively long time due to cache construction.

```
# Training
$ python train.py --data [MODELNET40 PATH] --exp_dir PointASNL_without_noise

# Evaluation
$ python test.py --data [MODELNET40 PATH] --model_path log/PointASNL_without_noise/best_model.ckpt
```

Data with Noise

The model with the AS module is extremely robust to noisy data. You can enable adaptive sampling with the --AS flag (a simplified sketch of the idea follows the commands below).

```
# Training
$ python train.py --data [MODELNET40 PATH] --exp_dir PointASNL_with_noise --AS

# Evaluation on noisy data
$ python test.py --data [MODELNET40 PATH] --model_path log/PointASNL_with_noise/best_model.ckpt --AS --noise
```
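
Conceptually, adaptive sampling re-weights each initially sampled point using its local neighborhood, so a sample that lands on an outlier is pulled back toward the underlying surface. The following is a simplified NumPy illustration of that idea, not PointASNL's actual module; all function and variable names are invented for the sketch.

```
import numpy as np

def adaptive_shift(xyz, features, sampled_idx, k=8):
    """Shift each sampled point toward a feature-weighted average of its k nearest neighbors.
    Simplified illustration of the adaptive sampling idea, not PointASNL's actual module.
    xyz: (N, 3) coordinates, features: (N, C) per-point features, sampled_idx: indices."""
    new_xyz = []
    for i in sampled_idx:
        d = np.linalg.norm(xyz - xyz[i], axis=1)
        nn = np.argsort(d)[:k]                      # k nearest neighbors of the sample
        scores = features[nn] @ features[i]         # similarity of neighbors to the sample
        w = np.exp(scores - scores.max())
        w /= w.sum()                                # softmax weights over the neighborhood
        new_xyz.append(w @ xyz[nn])                 # weighted average replaces the raw sample
    return np.asarray(new_xyz)
```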

(3) ScanNet Segmentation

We provide two options for training on the ScanNet dataset (with or without pre/post-processing). With grid sampling, more input points, and a deeper network structure, PointASNL achieves 66.6% on the ScanNet benchmark.

Data Preparation

The official ScanNet dataset can be downloaded here. If you choose to train without grid sampling, you first need to run ScanNet/prepare_scannet.py; otherwise you can skip to the training step.

Data without Processing

This method converges relatively slowly and achieves a result of around 63%.

```
# Training
$ cd ScanNet/
$ python train_scannet.py --data [SCANNET PATH] --log_dir PointASNL

# Evaluation
$ cd ScanNet/
$ python test_scannet.py --data [SCANNET PATH] --model_path log/PointASNL/latest_model.ckpt
```

Data with Grid Sampling

We highly recommend training with this method. Although it takes a long time to process the raw data, it achieves results of around 66% and converges faster. Grid sampling pre-processing is conducted automatically before training (a minimal sketch of grid sampling follows the commands below).

```
# Training
$ cd ScanNet/
$ python train_scannet_grid.py --data [SCANNET PATH] --log_dir PointASNL_grid --num_point 10240 --model pointasnl_sem_seg_res --in_radius 2

# Evaluation
$ cd ScanNet/
$ python test_scannet_grid.py --data [SCANNET PATH] --model_path log/PointASNL_grid/latest_model.ckpt
```
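
For reference, grid (voxel) sampling keeps one representative point per voxel of a fixed size, which caps point density and makes large scenes cheaper to process. Below is a minimal NumPy illustration of the idea; the repository's actual grid sampling utilities are borrowed from KPConv/RandLA-Net, and the voxel size here is an arbitrary example.

```
import numpy as np

def grid_subsample(points, voxel_size=0.04):
    """Keep the centroid of the points falling in each voxel.
    Minimal illustration; the repo's grid sampling comes from KPConv/RandLA-Net utilities."""
    voxels = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(voxels, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```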

Pre-trained Model

| Model | mIoU | Download |
| ------------- | ------------- | ------------- |
| pointasnl_sem_seg_res | 66.93 | ckpt-163.9M |

(4) SemanticKITTI Segmentation

  • The SemanticKITTI dataset can be found here. Download the files related to semantic segmentation and extract everything into the same folder.
  • We add code with grid sampling processing, which achieves a better result of around 52% (use --prepare_data only on the first run).
  • Please use the official semantic-kitti-api for evaluation.

```
# Training
$ cd SemanticKITTI/
$ python train_semantic_kitti.py --data [SemanticKITTI PATH] --log_dir PointASNL --with_remission
# or
$ python train_semantic_kitti_grid.py --data [SemanticKITTI PATH] --log_dir PointASNL_grid --prepare_data

# Evaluation
$ cd SemanticKITTI/
$ python test_semantic_kitti.py --data [SemanticKITTI PATH] --model_path log/PointASNL/latest_model.ckpt --with_remission
# or
$ python test_semantic_kitti_grid.py --data [SemanticKITTI PATH] --model_path log/PointASNL_grid/best_model.ckpt --test_area [e.g., 08]
```
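
The --with_remission flag additionally feeds the LiDAR intensity (remission) channel to the network. SemanticKITTI scans are stored as flat float32 .bin files with four values per point, so a scan can be read as sketched below (illustrative; the repository's data loader performs the equivalent step).

```
import numpy as np

def load_kitti_scan(path):
    """Read a SemanticKITTI .bin scan as x, y, z coordinates plus remission."""
    scan = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
    xyz, remission = scan[:, :3], scan[:, 3]
    return xyz, remission
```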

Acknowledgement

  • The original code is borrowed from PointNet++ and PointConv.
  • The code with grid sampling is borrowed from KPConv and RandLA-Net.
  • The kd-tree tool is from nanoflann.

License

This repository is released under the MIT License (see the LICENSE file for details).
