MightyChaos / LKVOLearner

Learning Depth from Monocular Videos using Direct Methods, CVPR 2018

License: BSD 3-Clause "New" or "Revised" License


# Learning Depth from Monocular Videos using Direct Methods

Implementation of the methods in "Learning Depth from Monocular Videos using Direct Methods". If you find this code useful, please cite our paper:

```
@InProceedings{Wang_2018_CVPR,
  author = {Wang, Chaoyang and Miguel Buenaposada, José and Zhu, Rui and Lucey, Simon},
  title = {Learning Depth From Monocular Videos Using Direct Methods},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2018}
}
```


#### Requirements

  • Python 3.6
  • PyTorch 0.3.1 (later or earlier versions of PyTorch are not compatible)

  • visdom, dominate


#### Data preparation

We refer to "SfMLearner" for preparing the training data from KITTI. We assume the processed data is placed in the directory "./data_kitti/".
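For orientation, SfMLearner-style preprocessing typically dumps each training sample as a single image with the frames of a short sequence concatenated horizontally. A minimal sketch of splitting such a sample back into individual frames (the function name and the 416-pixel frame width are illustrative assumptions, not this repository's code):

```python
import numpy as np

def split_sequence(seq_img, seq_length=3):
    """Split a horizontally concatenated frame sequence of shape
    (H, seq_length * W, 3) into a list of seq_length (H, W, 3) frames."""
    h, total_w, _ = seq_img.shape
    assert total_w % seq_length == 0, "width must be divisible by seq_length"
    w = total_w // seq_length
    return [seq_img[:, i * w:(i + 1) * w, :] for i in range(seq_length)]

# toy usage with a synthetic 128 x (3*416) "sample"
sample = np.zeros((128, 3 * 416, 3), dtype=np.uint8)
frames = split_sequence(sample)
print(len(frames), frames[0].shape)  # 3 (128, 416, 3)
```

The intrinsics for each sample are stored separately by the preprocessing, so only the image needs reshaping here.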

#### Training with different pose prediction modules

Start a visdom server for inspecting the learning progress before starting the training process:

```
python -m visdom.server -port 8009
```

1. #### Train from scratch with PoseNet

   See for details.

2. #### Finetune with DDVO

   Use the pretrained PoseNet to initialize DDVO. This corresponds to the results reported as "PoseNet+DDVO" in the paper. See for details.
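At its core, the "direct" refinement in DDVO is Gauss-Newton minimization of a photometric residual, made differentiable so it can sit behind the depth network. A 1-D toy analogue (not the repository's implementation; the function and toy signals are illustrative) showing the Gauss-Newton update on a photometric alignment problem:

```python
import numpy as np

def gauss_newton_shift(I_ref, I_tgt, t0=0.0, iters=20):
    """Estimate the shift t minimizing the photometric residual
    r(t) = I_ref(x + t) - I_tgt(x) by Gauss-Newton iterations
    (a 1-D toy analogue of direct visual odometry)."""
    xs = np.arange(len(I_ref), dtype=np.float64)
    t = t0
    for _ in range(iters):
        I_w = np.interp(xs + t, xs, I_ref)  # warp reference by current t
        J = np.gradient(I_w)                # d r / d t (image gradient)
        r = I_w - I_tgt                     # photometric residual
        t -= (J @ r) / (J @ J + 1e-12)      # Gauss-Newton step
    return t

# toy signals: a Gaussian bump, and the same bump shifted by 2.5 samples
xs = np.arange(100, dtype=np.float64)
I_ref = np.exp(-((xs - 50.0) ** 2) / 50.0)
I_tgt = np.exp(-((xs - 47.5) ** 2) / 50.0)  # equals I_ref(x + 2.5)
t_hat = gauss_newton_shift(I_ref, I_tgt)
print(round(t_hat, 2))  # close to the true shift of 2.5
```

In the full method the scalar shift becomes a 6-DoF camera pose, the warp uses the predicted depth, and each Gauss-Newton step remains differentiable so gradients flow back to the depth CNN.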


#### Pretrained models and results

  • Pretrained depth network reported as "Posenet-DDVO(CS+K)" in the paper [download].
  • Depth prediction results on the KITTI Eigen test split (see Table 1 in the paper): [Posenet(K)], [DDVO(K)], [Posenet+DDVO(K)], [Posenet+DDVO(CS+K)]

  • To test yourself:

    ```
    CUDA_VISIBLE_DEVICES=0 nice -10 python src/ --dataset_root $DATAROOT --ckpt_file $CKPT --output_path $OUTPUT --test_file_list test_files_eigen.txt
    ```


We again refer to "SfMLearner" for their evaluation code.
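For orientation, the Eigen-split evaluation reports a standard set of depth metrics. A self-contained sketch of the common ones (abs rel, sq rel, RMSE, and the δ < 1.25 accuracy); this mirrors the usual SfMLearner-style evaluation rather than quoting its code:

```python
import numpy as np

def depth_metrics(gt, pred):
    """Common monocular depth metrics for the KITTI Eigen test split."""
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = float((thresh < 1.25).mean())              # delta < 1.25 accuracy
    abs_rel = float(np.mean(np.abs(gt - pred) / gt))
    sq_rel = float(np.mean((gt - pred) ** 2 / gt))
    rmse = float(np.sqrt(np.mean((gt - pred) ** 2)))
    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse, "a1": a1}

# toy ground-truth / predicted depths (meters)
gt = np.array([2.0, 4.0, 10.0])
pred = np.array([2.2, 3.6, 11.0])
m = depth_metrics(gt, pred)
print(round(m["abs_rel"], 3), m["a1"])  # 0.1 1.0
```

The actual evaluation additionally masks invalid regions and median-scales predictions to the ground truth, since monocular depth is only recovered up to scale.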


#### Acknowledgement

Part of the code structure is borrowed from "Pytorch CycleGAN".
