This is an unofficial implementation of *VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection* in TensorFlow. A large part of this project is based on the work here. Thanks to @jeasinema. This work is a modified version with bugs fixed and better experimental settings to chase the results reported in the paper (still ongoing).
TensorFlow (tested on 1.4.1)
```bash
$ python3 setup.py build_ext --inplace
```
```bash
$ cd kitti_eval
$ g++ -o evaluate_object_3d_offline evaluate_object_3d_offline.cpp
```
```bash
$ cd kitti_eval
$ chmod +x launch_test.sh
```
Download the 3D KITTI detection dataset from here. Data to download include:
In this project, we use the cropped point cloud data for training and validation. Point clouds outside the image coordinates are removed. Update the directories in `data/crop.py` to generate cropped data. Note that cropped point cloud data will overwrite the raw point cloud data.
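The cropping idea can be sketched as below: project each LiDAR point into the image plane and keep only points that land inside the image. This is a minimal sketch using KITTI-style calibration matrices; the function name and matrix shapes are illustrative assumptions, not this repo's actual `crop.py` code.

```python
import numpy as np

def crop_point_cloud(points, P2, R0, Tr, width, height):
    """Keep only LiDAR points that project inside the image.

    points: (N, 4) array of (x, y, z, reflectance) in velodyne coordinates.
    P2:     (3, 4) camera projection matrix.
    R0:     (4, 4) rectification matrix (padded to homogeneous form).
    Tr:     (4, 4) velodyne-to-camera transform (padded to homogeneous form).
    """
    hom = np.hstack([points[:, :3], np.ones((len(points), 1))])  # (N, 4) homogeneous
    cam = R0 @ Tr @ hom.T                                        # (4, N) camera frame
    in_front = cam[2] > 0                                        # drop points behind the camera
    pix = P2 @ cam                                               # (3, N) homogeneous pixels
    u, v = pix[0] / pix[2], pix[1] / pix[2]
    mask = in_front & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    return points[mask]
```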
Split the training set into training and validation sets according to the protocol here, and rearrange the folders to have the following structure:
```plain
└── DATA_DIR
    ├── training
```
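The rearranging step can be sketched as follows. The split-file name (`val.txt`), the per-sample file extensions, and the `image_2`/`label_2`/`velodyne` subfolder layout are assumptions following common KITTI conventions, not this repo's code.

```python
import os
import shutil

# Hypothetical KITTI-style subfolders and their file extensions.
SUBDIRS = {"image_2": ".png", "label_2": ".txt", "velodyne": ".bin"}

def rearrange(data_dir, split_file, dest_name):
    """Move the samples listed in split_file from training/ into DATA_DIR/<dest_name>/."""
    with open(split_file) as f:
        ids = [line.strip() for line in f if line.strip()]
    for sub, ext in SUBDIRS.items():
        src_dir = os.path.join(data_dir, "training", sub)
        dst_dir = os.path.join(data_dir, dest_name, sub)
        os.makedirs(dst_dir, exist_ok=True)
        for sample_id in ids:
            shutil.move(os.path.join(src_dir, sample_id + ext),
                        os.path.join(dst_dir, sample_id + ext))
```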
Update the dataset directory in `train.py` with desired hyper-parameters to start training:
```bash
$ python3 train.py --alpha 1 --beta 10
```
Note that the hyper-parameter settings introduced in the paper are not able to produce high-quality results, so a different setting is specified here.
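In the VoxelNet loss, `--alpha` and `--beta` weight the positive- and negative-anchor classification terms. A minimal numpy sketch of how they enter the loss (function and variable names are illustrative, and plain L1 stands in for the paper's smooth-L1 regression term):

```python
import numpy as np

def rpn_loss(p_pos, p_neg, reg_pred, reg_target, alpha=1.0, beta=10.0):
    """Illustrative VoxelNet-style RPN loss, not the repo's exact code."""
    eps = 1e-6
    l_pos = -np.log(p_pos + eps).mean()           # positive anchors should score high
    l_neg = -np.log(1.0 - p_neg + eps).mean()     # negative anchors should score low
    l_reg = np.abs(reg_pred - reg_target).mean()  # L1 in place of smooth-L1
    return alpha * l_pos + beta * l_neg + l_reg
```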
Training on two Nvidia 1080 Ti GPUs takes around 3 days (160 epochs as reported in the paper). During training, training statistics are recorded in `log/default`, which can be monitored by TensorBoard, and models are saved in `save_model/default`. Intermediate validation results will be dumped into `predictions/XXX`, where `XXX` is the epoch number, and metrics will be calculated and saved in `predictions/XXX/log`. If the `--vis` flag is set to `True`, visualizations of intermediate results will be dumped there as well.
When the training is done, executing `parse_log.py` will generate the learning curve.
```bash
$ python3 parse_log.py predictions
```
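Conceptually, generating the learning curve means scanning the per-epoch log files for metric values. A hedged sketch of that step; the regex and the log-line format here are assumptions, not the repo's actual format:

```python
import re

def parse_curve(lines):
    """Extract (epoch, AP) pairs from hypothetical log lines like 'epoch 3 ... AP = 65.0'."""
    pat = re.compile(r"epoch\s+(\d+).*?AP\s*=\s*([\d.]+)")
    curve = []
    for line in lines:
        m = pat.search(line)
        if m:
            curve.append((int(m.group(1)), float(m.group(2))))
    return curve
```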
There is a pre-trained model for car. Run `test.py -n default` to produce final predictions on the validation set after training is done. Changing the flag to `pre_trained_car` will start testing with the pre-trained model (only the car model is provided for now).
```bash
$ python3 test.py
```
Results will be dumped into `predictions/data`. Set the `--vis` flag to `True` to also dump visualizations.
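The dumped results are plain-text files in the KITTI result format, which is what the offline evaluator consumes: 16 fields per detection (type, truncation, occlusion, alpha, 2D bbox, h w l, x y z, rotation_y, score). A hedged sketch of formatting one line; the placeholder values (0.00 truncation, -10 alpha) and rounding are illustrative, not taken from this repo's `test.py`:

```python
def kitti_result_line(cls, bbox2d, hwl, xyz, ry, score):
    """Format one detection as a KITTI-style result line (illustrative sketch)."""
    x1, y1, x2, y2 = bbox2d
    h, w, l = hwl
    x, y, z = xyz
    # Fields: type trunc occ alpha | x1 y1 x2 y2 | h w l | x y z | ry | score
    return (f"{cls} 0.00 0 -10 "
            f"{x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
            f"{h:.2f} {w:.2f} {l:.2f} "
            f"{x:.2f} {y:.2f} {z:.2f} {ry:.2f} {score:.2f}")
```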
Run the following command to measure the quantitative performance of the predictions:
```bash
$ ./kitti_eval/evaluate_object_3d_offline [DATA_DIR]/validation/label_2 ./predictions
```
The current implementation and training scheme produce the results in the tables below.
| Car | Easy | Moderate | Hard |
|:-:|:-:|:-:|:-:|
| Reported | 89.60 | 84.81 | 78.57 |
| Reproduced | 85.41 | 83.16 | 77.10 |
| Car | Easy | Moderate | Hard |
|:-:|:-:|:-:|:-:|
| Reported | 81.97 | 65.46 | 62.85 |
| Reproduced | 53.43 | 48.78 | 48.06 |