Code for our ICCV paper "DeepHuman: 3D Human Reconstruction from a Single Image"

DeepHuman: 3D Human Reconstruction from a Single Image

Zerong Zheng, Tao Yu, Yixuan Wei, Qionghai Dai, Yebin Liu. ICCV 2019

[Project Page] [Paper] [Dataset]




Requirements

  • python 2.7
  • numpy
  • tensorflow-gpu
  • opendr
  • opencv-python
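The repository does not pin package versions. As a rough guide, a requirements.txt for a Python 2.7 / TensorFlow 1.x environment of that era might look like the following; the specific versions are assumptions for illustration, not taken from the repository:

```text
numpy
tensorflow-gpu==1.13.1
opendr==0.78
opencv-python
```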


Installation

  1. Set up im2smpl in

    according to this guidance
  2. Clone this repository and install required libraries:

    cd path/to/deephuman
    git clone
    cd DeepHuman
    virtualenv deephuman_env
    source deephuman_env/bin/activate
    pip install -r requirements.txt
  3. Build voxelizer:

    cd voxelizer
    mkdir build && cd build
    cmake ..
    make
  4. Change the path configuration at LINE

  5. Download our pre-trained model:

    mkdir results && cd results
    tar -xzf results_final_19_09_30_10_29_33.tar.gz
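For intuition about what the voxelizer in step 3 produces: it converts geometry into a discrete occupancy volume. The sketch below voxelizes a point cloud with plain numpy — an illustration of the idea only, not the C++ voxelizer shipped with the repository (the function name, resolution, and bounds are invented for the example):

```python
import numpy as np

def voxelize_points(points, res=32, bmin=-1.0, bmax=1.0):
    """Turn an (N, 3) point cloud inside [bmin, bmax]^3 into a binary occupancy grid."""
    grid = np.zeros((res, res, res), dtype=np.uint8)
    # map continuous coordinates to integer voxel indices
    idx = ((points - bmin) / (bmax - bmin) * res).astype(int)
    idx = np.clip(idx, 0, res - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return grid

# toy example: voxelize 1000 random points
pts = np.random.uniform(-1, 1, size=(1000, 3))
vol = voxelize_points(pts)
print(vol.shape)  # (32, 32, 32)
```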


Testing

Prepare your image (tightly cropped and resized to 512x512, with the person roughly 450px tall) and run:

python2 --file ./examples/img.jpg
python2 --file ./examples/img.jpg
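The script names in the commands above did not survive formatting, but the stated preprocessing requirement — a 512x512 image in which the person stands about 450px tall — can be sketched with plain numpy. `crop_and_resize` and its bounding-box interface are hypothetical helpers for illustration, not part of the repository:

```python
import numpy as np

def crop_and_resize(img, bbox, out_size=512, person_height=450):
    """Crop the person given by bbox = (top, left, bottom, right), rescale so the
    person is person_height px tall, and centre the result on an out_size canvas.
    Assumes the rescaled crop fits inside the canvas."""
    t, l, b, r = bbox
    person = img[t:b, l:r]
    scale = person_height / float(b - t)
    new_h = int(round((b - t) * scale))
    new_w = int(round((r - l) * scale))
    # nearest-neighbour resampling via integer index grids
    ys = (np.arange(new_h) / scale).astype(int).clip(0, person.shape[0] - 1)
    xs = (np.arange(new_w) / scale).astype(int).clip(0, person.shape[1] - 1)
    resized = person[ys][:, xs]
    canvas = np.zeros((out_size, out_size, 3), dtype=img.dtype)
    y0 = (out_size - new_h) // 2
    x0 = (out_size - new_w) // 2
    canvas[y0:y0 + new_h, x0:x0 + new_w] = resized
    return canvas

# toy image: 600x400, "person" occupying rows 50..550, columns 100..300
img = np.zeros((600, 400, 3), dtype=np.uint8)
out = crop_and_resize(img, (50, 100, 550, 300))
print(out.shape)  # (512, 512, 3)
```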


Training

  1. Download the THUman dataset and unzip it into

    directory. Then update
    accordingly. We provide one sample data item for reference.
  2. Prepare your own background images in

    and update
    accordingly. You can use images from LSUN dataset. We provide one sample image item for reference.
  3. Run the following command to generate training data:

    cd TrainingDataPreparation/
    python2    # generate training images
    python2  # generate input/output volumes
  4. Define your own training/testing split in LINE

    . After that, run the following command to train the network:

NOTE: Due to the inherent limitation of our data capturing system, the THUman dataset doesn't contain enough human models with loose clothes like skirts, dresses, coats, etc. If you want to make the network more general and robust for different garments, you may need to collect more data from other sources such as RenderPeople, 3DPeople, DeepFashion3D, etc.
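The file and line reference for the training/testing split in step 4 did not survive formatting, but the split itself is just a partition of dataset item names. A minimal sketch of one way to generate it — the helper name, ratio, and naming scheme are assumptions, not the repository's actual convention:

```python
import random

def make_split(item_names, test_ratio=0.1, seed=0):
    """Deterministically shuffle item names, then carve off a test set."""
    rng = random.Random(seed)
    names = sorted(item_names)
    rng.shuffle(names)
    n_test = max(1, int(len(names) * test_ratio))
    return names[n_test:], names[:n_test]

# toy dataset of 100 items
items = ['item_%03d' % i for i in range(100)]
train, test = make_split(items)
print(len(train), len(test))  # 90 10
```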


License

Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use DeepHuman Software/Data (the "Software"). By downloading and/or using the Software, you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Software.


The Software has been developed at Tsinghua University and is owned by, and is proprietary material of, Tsinghua University.

License Grant

Tsinghua University grants you a non-exclusive, non-transferable, free of charge right:

To download the Software and use it on computers owned, leased or otherwise controlled by you and/or your organisation;

To use the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.

Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, as training data for a commercial product, for commercial ergonomic analysis (e.g. product design, architectural design, etc.), or production of other artifacts for commercial purposes including, for example, web services, movies, television programs, mobile applications, or video games. The Software may not be used for pornographic purposes or to generate pornographic material whether commercial or not. This license also prohibits the use of the Software to train methods/algorithms/neural networks/etc. for commercial use of any kind. The Software may not be reproduced, modified and/or made available in any form to any third party without Tsinghua University’s prior written permission. By downloading the Software, you agree not to reverse engineer it.

Disclaimer of Representations and Warranties

You expressly acknowledge and agree that the Software results from basic research, is provided “AS IS”, may contain errors, and that any use of the Software is at your sole risk. TSINGHUA UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE SOFTWARE, NEITHER EXPRESS NOR IMPLIED, AND THE ABSENCE OF ANY LEGAL OR ACTUAL DEFECTS, WHETHER DISCOVERABLE OR NOT. Specifically, and not to limit the foregoing, Tsinghua University makes no representations or warranties (i) regarding the merchantability or fitness for a particular purpose of the Software, (ii) that the use of the Software will not infringe any patents, copyrights or other intellectual property rights of a third party, and (iii) that the use of the Software will not cause any damage of any kind to you or a third party.

Limitation of Liability

Under no circumstances shall Tsinghua University be liable for any incidental, special, indirect or consequential damages arising out of or relating to this license, including but not limited to, any lost profits, business interruption, loss of programs or other data, or all other commercial damages or losses, even if advised of the possibility thereof.

No Maintenance Services

You understand and agree that Tsinghua University is under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. Tsinghua University nevertheless reserves the right to update, modify, or discontinue the Software at any time.

Publication with the Software

You agree to cite the paper describing the software and algorithm as specified on the download website.

Media Projects with the Software

When using the Software in a media project please give credit to Tsinghua University. For example: the Software was used for performance capture courtesy of the Tsinghua University.

Commercial Licensing Opportunities

For commercial use and commercial license please contact: [email protected]


Citation

If you use this code for your research, please consider citing:

    @InProceedings{Zheng_2019_ICCV,
    author = {Zheng, Zerong and Yu, Tao and Wei, Yixuan and Dai, Qionghai and Liu, Yebin},
    title = {DeepHuman: 3D Human Reconstruction From a Single Image},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {October},
    year = {2019}
    }

