
CalibNet

Code for our paper: CalibNet: Self-Supervised Extrinsic Calibration using 3D Spatial Transformer Networks

Check out our project page!


Prerequisites

CalibNet is trained on TensorFlow 1.3, CUDA 8.0, and cuDNN 7.0.1.
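
A quick way to confirm the toolchain versions before installing anything else (the commands below are illustrative, and the cudnn.h location may differ on your system):

nvcc --version                                                    # expect CUDA release 8.0
grep -A 2 "#define CUDNN_MAJOR" /usr/local/cuda/include/cudnn.h   # expect cuDNN 7.x
python -c "import tensorflow as tf; print(tf.__version__)"        # expect 1.3.x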

Installation

The code for the point cloud distance loss is modified from PU-Net, PointNet++, and PointSetGeneration.

This repository therefore builds on TensorFlow and the custom TF operators from PointNet++ and PU-Net.

To install TensorFlow, please follow the official instructions here. The code is tested with TF 1.3 and Python 2.7 on Ubuntu 16.04.
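
For reference, the tested release can usually be installed with pip in a Python 2.7 environment (this assumes CUDA 8.0 and cuDNN are already set up; the tensorflow-gpu==1.3.0 wheel is only published for older pip/Python combinations):

pip install tensorflow-gpu==1.3.0   # GPU build matching the tested TF 1.3 setup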

To compile the TF operators, please check tf_xxx_compile.sh under each op subfolder in the code/tf_ops folder, and change the path so that it points to ../path/to/tensorflow/include. Note that you may need to update the nvcc, python, and TensorFlow include/library paths. If necessary, you may also need to remove the -D_GLIBCXX_USE_CXX11_ABI=0 flag from the g++ command in order to compile correctly.
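
As a rough sketch, the per-op compile scripts follow the usual PointNet++ pattern shown below; the file names (tf_op.cpp, tf_op_g.cu) and the CUDA/TensorFlow paths are placeholders, so adapt them to the actual tf_xxx_compile.sh you are editing:

# Locate the TensorFlow headers for the active Python environment
TF_INC=$(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')

# Compile the CUDA kernel for the op (file names are placeholders)
/usr/local/cuda-8.0/bin/nvcc tf_op_g.cu -o tf_op_g.cu.o -c -O2 -DGOOGLE_CUDA=1 -x cu -Xcompiler -fPIC

# Build the shared library; drop -D_GLIBCXX_USE_CXX11_ABI=0 here if g++ fails with it
g++ -std=c++11 tf_op.cpp tf_op_g.cu.o -o tf_op_so.so -shared -fPIC -I $TF_INC -I /usr/local/cuda-8.0/include -lcudart -L /usr/local/cuda-8.0/lib64 -O2 -D_GLIBCXX_USE_CXX11_ABI=0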

We are working to update the code and installation steps for the latest TensorFlow versions.

Dataset Preparation

To prepare the dataset, run dataset_files/dataset_builder_parallel.sh in the directory where you wish to store the data. A parser file, `parsed_set.txt`, is also created for the dataset; it contains the file names used for training.

git clone https://github.com/epiception/CalibNet.git
or
svn checkout https://github.com/epiception/CalibNet/trunk/code (for the code)
cd ../path/to/dataset_directory
bash ../path/to/code_folder/dataset_files/dataset_builder_parallel.sh
cd ../path/to/CalibNet/code
python dataset_files/parser.py ../dataset_directory/2011_09_26/
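
Once the parser has finished, a quick sanity check on the generated list can catch path problems early (parsed_set.txt is written by parser.py; the path below assumes it ends up in the code directory, so adjust it if your layout differs):

wc -l parsed_set.txt       # number of training entries found
head -n 5 parsed_set.txt   # inspect the first few file-name entries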
ResNet-18

Pretrained ResNet-18 parameters can be found here.

Training

Before training, be sure to make the requisite changes to the paths and training parameters in the config file config_res.py. We trained using 2 GPUs, and the base code is written with its ops assigned for the same device configuration.

To begin training:

CUDA_VISIBLE_DEVICES=0,1 python -B train_model_combined.py
Trained Weights

Trained weights for the base variant (non-iterative) model are available here. This model was trained for 44 epochs. As mentioned in the paper, the iterative realignment model for better translation outputs will be uploaded soon.

Evaluation/Test

Code for the direct evaluation/testing pipeline for point cloud calibration will be uploaded soon.
