Fast Scene Understanding

Torch implementation for simultaneous semantic segmentation, instance segmentation and single-image depth estimation. Two videos can be found here and here.
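All three tasks are predicted jointly by one network with a shared encoder and a small branch per task. Below is a minimal PyTorch sketch of that general idea only; the layer sizes and module names are illustrative assumptions, not the actual architecture in this repository's Lua/Torch code.

```python
# Minimal sketch of the shared-encoder / three-branch idea. All layer sizes
# and names are illustrative assumptions, not this repository's model.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=19, embedding_dim=8):
        super().__init__()
        # Shared feature extractor, reused by all three tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # One small head per task.
        self.semantic = nn.Conv2d(64, num_classes, 1)    # per-pixel class scores
        self.instance = nn.Conv2d(64, embedding_dim, 1)  # per-pixel embeddings
        self.depth = nn.Conv2d(64, 1, 1)                 # per-pixel depth

    def forward(self, x):
        f = self.encoder(x)
        return self.semantic(f), self.instance(f), self.depth(f)

# Example: one forward pass on a dummy half-resolution crop.
if __name__ == "__main__":
    sem, ins, dep = MultiTaskNet()(torch.randn(1, 3, 256, 512))
    print(sem.shape, ins.shape, dep.shape)
```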


If you use this code for your research, please cite our papers:

Fast Scene Understanding for Autonomous Driving
Davy Neven, Bert De Brabandere, Stamatios Georgoulis, Marc Proesmans and Luc Van Gool
Published at "Deep Learning for Vehicle Perception", workshop at the IEEE Intelligent Vehicles Symposium (IV) 2017

and

Semantic Instance Segmentation with a Discriminative Loss Function
Bert De Brabandere, Davy Neven and Luc Van Gool
Published at "Deep Learning for Robotic Vision", workshop at CVPR 2017

Setup

Prerequisites

Torch dependencies:

Data dependencies:

Download Cityscapes and run the scripts `createTrainIdLabelImgs.py` and `createTrainIdInstanceImgs.py` from the cityscapesScripts repository to create annotations based on the training labels. Make sure that the dataset folder is named `cityscapes`.
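For reference, here is a hedged sketch of that preparation step. It assumes the scripts are invoked from a checkout of the cityscapesScripts repository and that they read the dataset location from the `CITYSCAPES_DATASET` environment variable; the placeholder path is yours to fill in.

```python
# Hedged sketch: the script paths and the CITYSCAPES_DATASET variable are
# assumptions based on the cityscapesScripts repository, not part of this repo.
import os
import subprocess

env = dict(os.environ, CITYSCAPES_DATASET="/path/to/cityscapes")
for script in ("createTrainIdLabelImgs.py", "createTrainIdInstanceImgs.py"):
    subprocess.run(
        ["python", os.path.join("cityscapesscripts", "preparation", script)],
        env=env, check=True)
```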

Afterwards, create the following txt files:

```sh
cd CITYSCAPES_FOLDER

ls leftImg8bit/train/*/*.png > trainImages.txt
ls leftImg8bit/val/*/*.png > valImages.txt

ls gtFine/train/*/*labelTrainIds.png > trainLabels.txt
ls gtFine/val/*/*labelTrainIds.png > valLabels.txt

ls gtFine/train/*/*instanceTrainIds.png > trainInstances.txt
ls gtFine/val/*/*instanceTrainIds.png > valInstances.txt

ls disparity/train/*/*.png > trainDepth.txt
ls disparity/val/*/*.png > valDepth.txt
```
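As a quick sanity check, the four txt files of each split should list the same number of files; a mismatch usually means one of the globs above went wrong. A minimal sketch, assuming it is run from the `cityscapes` folder where the txt files were created:

```python
# Count the lines of each generated txt file; for each split, all four
# counts should match (one label, instance and depth map per image).
for split in ("train", "val"):
    for kind in ("Images", "Labels", "Instances", "Depth"):
        with open(f"{split}{kind}.txt") as f:
            print(f"{split}{kind}: {sum(1 for _ in f)} files")
```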

Download pretrained model

To download both the pretrained segmentation model for training and a trained model for testing, run:

```sh
sh download.sh
```

Test pretrained model

To test the pretrained model, make sure you have downloaded both the pretrained model and the Cityscapes dataset (plus the scripts and txt files, see above). Afterwards, run:

```sh
qlua test.lua -data_root CITYSCAPES_ROOT
```

with CITYSCAPES_ROOT the folder where `cityscapes` is located. For other options, see `test_opts.lua`.

Train your own model

To train your own model, run:

```sh
qlua main.lua -data_root CITYSCAPES_ROOT -save true -directory PATH_TO_SAVE
```

For other options, see `opts.lua`.

Tensorflow code

A third-party TensorFlow implementation of our loss function, applied to lane instance segmentation, is available from Hanqiu Jiang's GitHub repository.
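For readers who want the gist of that loss without digging through either implementation, here is a minimal NumPy sketch of the discriminative loss from the paper above. The margins delta_v = 0.5 and delta_d = 1.5 and the weights alpha = beta = 1, gamma = 0.001 follow the paper; everything else (function name, array layout) is an illustrative assumption, not this repository's API.

```python
# Minimal sketch of the discriminative loss; hyperparameters follow the
# paper, the function itself is illustrative and not this repo's API.
import numpy as np

def discriminative_loss(embeddings, labels, delta_v=0.5, delta_d=1.5,
                        alpha=1.0, beta=1.0, gamma=0.001):
    """embeddings: (N, D) pixel embeddings; labels: (N,) instance ids, 0 = background."""
    ids = [i for i in np.unique(labels) if i != 0]
    if not ids:
        return 0.0
    means = np.stack([embeddings[labels == i].mean(axis=0) for i in ids])

    # Variance (pull) term: pixels are pulled towards their instance mean,
    # but only once they are farther away than the margin delta_v.
    l_var = 0.0
    for mu, i in zip(means, ids):
        d = np.linalg.norm(embeddings[labels == i] - mu, axis=1)
        l_var += np.mean(np.maximum(0.0, d - delta_v) ** 2)
    l_var /= len(ids)

    # Distance (push) term: instance means repel each other until they
    # are at least 2 * delta_d apart.
    l_dist = 0.0
    if len(ids) > 1:
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                d = np.linalg.norm(means[a] - means[b])
                l_dist += np.maximum(0.0, 2 * delta_d - d) ** 2
        l_dist /= len(ids) * (len(ids) - 1) / 2

    # Regularization term keeps the embeddings bounded.
    l_reg = np.mean(np.linalg.norm(means, axis=1))
    return alpha * l_var + beta * l_dist + gamma * l_reg

# Example on random data: 100 pixels with 8-D embeddings, 3 instances.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(100, 8))
    lbl = rng.integers(0, 4, size=100)
    print(discriminative_loss(emb, lbl))
```

At test time, pixels are clustered around these means (e.g. with mean shift) to recover the individual instance masks, which is why the loss only needs to shape the embedding space rather than predict masks directly.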
