Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency (ECCV 2020)

This is the official Python implementation of MGCNet, accompanying the pre-print version of the paper.


  1. Demo video

  2. Result images

  3. The full video can be seen on YouTube

Running code

1. Code + requirements + third-party libraries

We run the code with Python 3.7 and TensorFlow 1.13.

```
git clone --recursive
cd MGCNet
(sudo) pip install -r requirement.txt
```
(1) For the render loss (reconstruction loss), we use the differentiable renderer tf_mesh_renderer. Many issues have come up here, so let us make this clearer: tf_mesh_renderer does not return the triangle ID for each pixel after rasterization, so we do this ourselves and add the changes as a submodule of MGCNet.
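To make the role of the render loss concrete, here is a minimal sketch in plain NumPy (illustrative names only, not the repo's API): the rendered image is compared with the input photo only on pixels that the rasterizer actually covered with a face triangle.

```python
import numpy as np

def render_loss(image, rendered, visibility_mask):
    """Masked photometric (reconstruction) loss.

    image, rendered: (H, W, 3) float arrays in [0, 1].
    visibility_mask: (H, W) array, 1 where the rasterizer covered the
                     pixel with a face triangle, 0 elsewhere.
    """
    diff = np.linalg.norm(image - rendered, axis=-1)  # per-pixel L2 error
    masked = diff * visibility_mask
    # Average over visible pixels only, so an empty mask is not rewarded.
    return masked.sum() / np.maximum(visibility_mask.sum(), 1.0)
```

In the actual network the visibility mask is derived from the per-pixel triangle IDs mentioned above, and the whole pipeline stays differentiable end to end.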

(2) How to compile tf_mesh_renderer: our setting is bazel==10.1 and gcc==5.*, and the compile command is

```
bazel build ...
```
A gcc/g++ version higher than 5.* can cause problems; a good workaround is a virtual environment with, say, gcc 5.5. If your gcc/g++ version is 4.*, you can try changing the compile command in the BUILD file: use the flag -D_GLIBCXX_USE_CXX11_ABI=0 for 4.* or -D_GLIBCXX_USE_CXX11_ABI=1 for 5.*.


  2. 3DMM model + network weights

We merge BFM09, the BFM09 expression basis, the BFM09 face region from Yu Deng, and the BFM09 UV coordinates from 3DMMasSTN into a single 3DMM model. Extract this file to /MGCNet/model.

  3. Pretrained weights

This includes the pretrained weights for ResNet-50 and the VGG pretrained model for FaceNet. Extract this file to /MGCNet/pretrain.
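For readers unfamiliar with 3DMMs: the merged model above is, at its core, a linear model, a mean face deformed by identity and expression bases. A minimal NumPy sketch with made-up dimensions (not the repo's model-loading code):

```python
import numpy as np

def reconstruct_shape(mean_shape, id_basis, exp_basis, alpha, beta):
    """Linear 3DMM: vertices = mean + identity deformation + expression deformation.

    mean_shape: (3N,) flattened mean face vertices.
    id_basis:   (3N, K_id)  identity basis (e.g. BFM09 shape PCA).
    exp_basis:  (3N, K_exp) expression basis.
    alpha, beta: coefficient vectors predicted by the network.
    """
    verts = mean_shape + id_basis @ alpha + exp_basis @ beta
    return verts.reshape(-1, 3)  # (N, 3) vertex positions
```

The network regresses alpha and beta (plus pose and lighting) from a single image; the renderer then turns the reconstructed vertices back into pixels for the losses.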


  4. Demo data

Extract this file to /MGCNet/data. We cannot provide all the data, as it is too large and the license of the Multi-PIE dataset does not allow us to redistribute it.

  5. Data: landmark ground truth

The landmark detection method follows the referenced work, and we use the SFD face detector.

  6. Data: skin probability

This part of the code comes from Yu Deng ([email protected]); you may ask him for help.
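For context, such a skin probability map is typically used to down-weight non-skin pixels (hair, glasses, occluders) in the photometric loss. A toy stand-in with placeholder color statistics (the real code fits its model to data; these numbers are purely illustrative):

```python
import numpy as np

# Illustrative mean/std of skin tones in RGB; placeholders, not fitted values.
SKIN_MEAN = np.array([0.75, 0.60, 0.50])
SKIN_STD = np.array([0.15, 0.15, 0.15])

def skin_probability(image):
    """Per-pixel pseudo-probability that a pixel is skin.

    image: (H, W, 3) floats in [0, 1]. Returns (H, W) values in (0, 1],
    an unnormalized Gaussian likelihood under the toy skin model.
    """
    z = (image - SKIN_MEAN) / SKIN_STD
    return np.exp(-0.5 * (z ** 2).sum(-1))
```

Multiplying this map into the render-loss mask makes the reconstruction focus on actual face skin rather than occluders.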


  1. This script runs inference on a single unprocessed image (the command is given in the file). It can also render the resulting images (geometry, texture, shading, multi-pose), as shown above or in our paper (see the code), which makes visualization and comparison more convenient.

  2. Preprocessing. All preprocessing is included in '', and we outline it here: (1) face detection and face alignment are packaged in ./tools/preprocess/detectlandmark.py; (2) face alignment warps the unprocessed image by an affine transformation. To test all the images in a folder, follow this preprocessing.
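The affine-warp step in (2) usually boils down to estimating a similarity transform from the detected landmarks to a fixed landmark template and warping the image with it. A self-contained sketch of the estimation step (Umeyama least squares; an illustration of the technique, not the repo's exact code):

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale * rotation + translation)
    mapping src landmarks onto dst landmarks (Umeyama's method).

    src, dst: (K, 2) landmark arrays. Returns a 2x3 warp matrix M such
    that dst ~= src @ M[:, :2].T + M[:, 2].
    """
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))      # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])  # 2x3 affine warp matrix
```

The resulting 2x3 matrix is what an image-warping routine (e.g. an OpenCV-style warpAffine) consumes to produce the aligned crop.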



Useful tools (keep updating)

  1. Face alignment tools
  2. 3D face render tools
  3. Camera augmentation for rendering
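As a sketch of what camera augmentation for rendering can look like (a hypothetical helper with placeholder angles and radius, not the repo's tool): orbit virtual cameras around the face and render the mesh from each pose.

```python
import numpy as np

def yaw_cameras(yaw_degrees, radius=3.0):
    """Build simple look-at camera extrinsics orbiting the face.

    For each yaw angle, returns an (R, t) pair: world-to-camera rotation
    and translation, with the camera on a horizontal circle of the given
    radius around the origin, looking at the origin.
    """
    cams = []
    for deg in yaw_degrees:
        a = np.radians(deg)
        pos = np.array([radius * np.sin(a), 0.0, radius * np.cos(a)])
        forward = -pos / np.linalg.norm(pos)       # camera looks at origin
        right = np.cross([0.0, 1.0, 0.0], forward)
        right /= np.linalg.norm(right)
        up = np.cross(forward, right)
        R = np.stack([right, up, forward])         # rows: camera axes
        t = -R @ pos                               # world-to-camera translation
        cams.append((R, t))
    return cams
```

Rendering the reconstructed mesh from each of these poses is what produces the multi-pose visualizations shown above.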


If you use this code, please consider citing:

```
@article{shang2020self,
  title={Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency},
  author={Shang, Jiaxiang and Shen, Tianwei and Li, Shiwei and Zhou, Lei and Zhen, Mingmin and Fang, Tian and Quan, Long},
  journal={arXiv preprint arXiv:2007.12494},
  year={2020}
}
```


Please contact [email protected] or open an issue for any questions or suggestions.


Thanks for the help from the recent 3D face reconstruction papers Deep3DFaceReconstruction, 3DMMasSTN, PRNet, RingNet, and 3DDFA, and the monocular depth estimation work DeepMatchVO. I would like to thank Tewari for providing the comparison results.
