An Unofficial Pytorch Implementation of MVSNet

MVSNet: Depth Inference for Unstructured Multi-view Stereo. Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, Long Quan. ECCV 2018. MVSNet is a deep learning architecture for depth map inference from unstructured multi-view images.

This is an unofficial PyTorch implementation of MVSNet.

How to Use

Environment

  • python 3.6 (Anaconda)
  • pytorch 1.0.1

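A quick way to confirm the environment matches these versions before training (generic PyTorch calls, nothing specific to this repo):

    # Sanity-check the environment: Python/PyTorch versions and CUDA availability.
    import sys
    import torch

    print("python :", sys.version.split()[0])    # expecting 3.6.x
    print("pytorch:", torch.__version__)         # expecting 1.0.1
    print("cuda   :", torch.cuda.is_available()) # training requires a CUDA-capable GPU
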
Training

  • Download the preprocessed DTU training data (Fixed training cameras, from Original MVSNet) and unzip it as the MVS_TRAINING folder (a quick layout check is sketched after this list).
  • In train.sh, set MVS_TRAINING to your training data path.
  • Create a logdir called checkpoints.
  • Train MVSNet: ./train.sh

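Before launching ./train.sh, it can save time to verify that the unzipped data landed where the script expects it. A minimal sketch; the sub-folder names below are an assumption based on the usual layout of the preprocessed DTU training release, so adjust them to your actual download:

    import os

    # Hypothetical path; point this at the folder you unzipped (MVS_TRAINING in train.sh).
    MVS_TRAINING = "/path/to/mvs_training/dtu"

    # Assumed sub-folders of the preprocessed DTU training data; rename these if
    # your download is organised differently.
    for sub in ("Cameras", "Depths", "Rectified"):
        path = os.path.join(MVS_TRAINING, sub)
        print(sub.ljust(9), "ok" if os.path.isdir(path) else "MISSING", path)
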
Testing

  • Download the preprocessed DTU testing data (from Original MVSNet) and unzip it as the DTU_TESTING folder, which should contain one cams folder, one images folder and one pair.txt file (see the parsing sketch after this list).
  • In test.sh, set DTU_TESTING to your testing data path and CKPT_FILE to your checkpoint file. You can also download my pretrained model.
  • Test MVSNet: ./test.sh

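For reference, pair.txt lists, for every reference view, the source views (and their matching scores) to use with it. A minimal parsing sketch, assuming the layout produced by the original MVSNet preprocessing (first line: number of views; then, per view, one line with the reference id and one line of the form "N src_0 score_0 src_1 score_1 ..."):

    # Hedged sketch of a pair.txt reader; the assumed file layout is described above.
    def read_pair_file(path):
        pairs = []
        with open(path) as f:
            num_views = int(f.readline())
            for _ in range(num_views):
                ref_view = int(f.readline().rstrip())
                tokens = f.readline().rstrip().split()
                # tokens = [N, src_0, score_0, src_1, score_1, ...]; keep only the view ids
                src_views = [int(x) for x in tokens[1::2]]
                pairs.append((ref_view, src_views))
        return pairs

    # Example: each entry is (reference view id, [source view ids ordered by score])
    # pairs = read_pair_file("pair.txt")
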
Fusion

In eval.py, I implemented a simple version of depth map fusion. Contributions to improve the code are welcome.

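For readers who want to extend the fusion step, the core operation is usually a geometric consistency check: a reference pixel is kept only if, after projecting it into a source view with its estimated depth and projecting it back, it lands close to where it started with a similar depth. The sketch below illustrates that check with NumPy; it is not the exact code in eval.py, and the function names, conventions (3x3 intrinsics, 4x4 world-to-camera extrinsics, equal image sizes) and thresholds are illustrative:

    import numpy as np

    def reproject_depth(depth_ref, K_ref, E_ref, depth_src, K_src, E_src):
        """Round-trip reference pixels through the source view.

        depth_* are (H, W) depth maps of equal size, K_* 3x3 intrinsics, E_* 4x4
        world-to-camera extrinsics (assumed conventions). Returns the pixel
        coordinates and depths obtained after projecting into the source view and back.
        """
        h, w = depth_ref.shape
        xx, yy = np.meshgrid(np.arange(w), np.arange(h))
        x = xx.reshape(-1).astype(np.float64)
        y = yy.reshape(-1).astype(np.float64)
        d = depth_ref.reshape(-1).astype(np.float64)

        # reference pixels -> reference camera -> world
        cam_ref = np.linalg.inv(K_ref) @ (np.stack([x, y, np.ones_like(x)]) * d)
        world = np.linalg.inv(E_ref) @ np.vstack([cam_ref, np.ones_like(d)])

        # world -> source camera -> source pixels
        cam_src = (E_src @ world)[:3]
        pix_src = K_src @ cam_src
        x_src = pix_src[0] / pix_src[2]
        y_src = pix_src[1] / pix_src[2]

        # sample the source depth at the projected locations (nearest neighbour)
        xs = np.clip(np.round(x_src).astype(int), 0, w - 1)
        ys = np.clip(np.round(y_src).astype(int), 0, h - 1)
        d_src = depth_src[ys, xs].astype(np.float64)

        # source pixels -> world -> back into the reference view
        cam_src2 = np.linalg.inv(K_src) @ (np.stack([x_src, y_src, np.ones_like(x_src)]) * d_src)
        world2 = np.linalg.inv(E_src) @ np.vstack([cam_src2, np.ones_like(d_src)])
        cam_ref2 = (E_ref @ world2)[:3]
        pix_ref2 = K_ref @ cam_ref2
        x_back = pix_ref2[0] / pix_ref2[2]
        y_back = pix_ref2[1] / pix_ref2[2]
        depth_back = cam_ref2[2]
        return x_back.reshape(h, w), y_back.reshape(h, w), depth_back.reshape(h, w)

    def geometric_consistency_mask(depth_ref, K_ref, E_ref, depth_src, K_src, E_src,
                                   pix_thresh=1.0, rel_depth_thresh=0.01):
        """Mask of reference pixels whose depth agrees with one source view."""
        h, w = depth_ref.shape
        xx, yy = np.meshgrid(np.arange(w), np.arange(h))
        x_back, y_back, depth_back = reproject_depth(
            depth_ref, K_ref, E_ref, depth_src, K_src, E_src)
        reproj_err = np.hypot(x_back - xx, y_back - yy)
        depth_err = np.abs(depth_back - depth_ref) / np.maximum(depth_ref, 1e-8)
        return (reproj_err < pix_thresh) & (depth_err < rel_depth_thresh)

In a full fusion pipeline this mask is typically combined with a photometric confidence threshold, and the check is required to hold for several source views before a pixel is back-projected into the fused point cloud.
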
Results on DTU

| Method                  | Acc.   | Comp.  | Overall |
|-------------------------|--------|--------|---------|
| MVSNet (D=256)          | 0.396  | 0.527  | 0.462   |
| PyTorch-MVSNet (D=192)  | 0.4492 | 0.3796 | 0.4144  |

Due to the memory limit, we only train the model with D=192. The fusion code is also different from the original repo.

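For context, D is the number of depth hypotheses swept when building the cost volume, so D=192 uses a narrower sweep than the original D=256 to fit in GPU memory. A minimal sketch of how the hypotheses are typically laid out (the numeric values are illustrative; in practice depth_min and depth_interval come from the per-view cam files):

    import numpy as np

    # Illustrative values; the real ones are read from the per-view cam files.
    depth_min, depth_interval, num_depth = 425.0, 2.5, 192

    # num_depth evenly spaced depth hypotheses for the plane-sweep cost volume.
    depth_values = depth_min + depth_interval * np.arange(num_depth, dtype=np.float32)
    print(depth_values[0], depth_values[-1])  # 425.0 902.5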