Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network

This is the official Python implementation of PRN.

PRN is a method for jointly regressing dense alignment and 3D face shape in an end-to-end manner. More examples on Multi-PIE and 300VW can be seen on YouTube.

The main features are:

  • End-to-End: our method directly regresses the 3D facial structure and dense alignment from a single image, bypassing 3DMM fitting.

  • Multi-task: by regressing the position map, the 3D geometry along with its semantic meaning is obtained. Thus we can effortlessly complete the tasks of dense alignment, monocular 3D face reconstruction, pose estimation, etc.

  • Faster than real-time: the method runs at over 100 FPS (on a GTX 1080) to regress a position map.

  • Robust: tested on facial images in unconstrained conditions, our method is robust to pose, illumination and occlusion.


Basics (evaluated in the paper)

  • Face Alignment

Dense alignment of both visible and non-visible points (including the 68 key points), along with a per-point visibility flag (1 for visible, 0 for non-visible).


  • 3D Face Reconstruction

Get the 3D vertices and corresponding colours from a single image, and save the result as mesh data (.obj) that can be opened with MeshLab or Microsoft 3D Builder. Note that the texture of non-visible areas is distorted due to self-occlusion.

  1. You can choose to output the mesh in its original pose (default) or in front view (which means all output meshes are aligned).
  2. The .obj file can now also be written with a texture map (of a specified texture size), and non-visible texture can be set to 0. A sketch of writing such a mesh follows this list.
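For reference, a coloured .obj of the kind described above is easy to write by hand. A minimal sketch in plain Python; the function name and array layout are illustrative, and the repo's own writer may differ in detail:

def write_obj_with_colors(path, vertices, colors, triangles):
    # vertices: (N, 3) xyz coordinates; colors: (N, 3) rgb values in [0, 1];
    # triangles: (M, 3) 0-based vertex indices. All names are illustrative.
    with open(path, 'w') as f:
        for (x, y, z), (r, g, b) in zip(vertices, colors):
            # Per-vertex colours appended after the coordinates: a common
            # .obj extension that MeshLab understands.
            f.write('v {} {} {} {} {} {}\n'.format(x, y, z, r, g, b))
        for t in triangles:
            # .obj face indices are 1-based.
            f.write('f {} {} {}\n'.format(t[0] + 1, t[1] + 1, t[2] + 1))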


More (to be added)

  • 3D Pose Estimation

Rather than using only the 68 key points to compute the camera matrix (which is easily affected by expression and pose), we use all of the vertices (more than 40K) to compute a more accurate pose; see the sketch after this list.

  • Depth image


  • Texture Editing

    • Data Augmentation / Selfie Editing

    Modify particular parts of the input face, for example the eyes.

    • Face Swapping

    Replace the texture with that of another face, then warp it to the original pose and use Poisson editing to blend the images.
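To make the dense-pose idea concrete, here is an illustrative numpy sketch (not this repo's code): given all regressed vertices and their canonical mean-face counterparts, a similarity transform can be solved in closed form with orthogonal Procrustes (the Umeyama method), and the head pose read off the rotation:

import numpy as np

def estimate_pose(X, Y):
    # X: (N, 3) canonical (mean-face) vertices, Y: (N, 3) regressed vertices.
    # Solves s * R @ x + t ~= y for scale s, rotation R, translation t.
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Yc.T @ Xc)   # 3x3 cross-covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt   # rotation: yaw/pitch/roll live here
    s = (S * np.array([1.0, 1.0, d])).sum() / (Xc ** 2).sum()
    t = my - s * R @ mx
    return s, R, t

With more than 40K correspondences the solution averages out per-landmark errors caused by expression, which is why it is more stable than a fit to 68 points alone.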


Getting Started


Prerequisite

  • Python 2.7 (numpy, skimage, scipy)

  • TensorFlow >= 1.4


  • dlib (for face detection; you do not have to install it if you can provide bounding-box information yourself)

  • opencv2 (for showing results)

GPU is highly recommended. The run time is ~0.01 s with a GPU (GeForce GTX 1080) and ~0.2 s on a CPU (Intel Xeon E5-2640 v4 @ 2.40 GHz).


Usage

  1. Clone the repository:

git clone https://github.com/YadiraF/PRNet
cd PRNet

  2. Download the PRN trained model from BaiduDrive or GoogleDrive, and put it into Data/net-data.

  3. Run the test code (tests on AFLW2000 images):

python run_basics.py  # can run with only python and tensorflow
  4. Run with your own images:

python demo.py -i <inputDir> -o <outputDir> --isDlib True

Run
python demo.py --help
for more details.
  5. For the texture-editing apps:

python demo_texture.py -i image_path_1 -r image_path_2 -o output_path

Run
python demo_texture.py --help
for more details.
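The demos can also be driven from Python directly. A minimal sketch, assuming the PRN class in this repo's api.py; check demo.py for the exact method names and the expected input range in your checkout:

from skimage.io import imread
from api import PRN                  # this repo's interface (assumption: api.py)

prn = PRN(is_dlib=True)              # let dlib detect the face box
image = imread('path/to/image.jpg')  # illustrative path; see demo.py for the
                                     # expected value range and resizing

pos = prn.process(image)             # regressed position map for the face
if pos is not None:                  # no face was detected otherwise
    kpt = prn.get_landmarks(pos)              # the 68 sparse key points
    vertices = prn.get_vertices(pos)          # dense 3D vertices (>40K)
    colors = prn.get_colors(image, vertices)  # per-vertex colours from the image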


Training

The core idea of the paper is:

Use a position map to represent face geometry and alignment information, then learn this representation with an encoder-decoder network.

So the training steps are:

  1. Generate the position-map ground truth.

An example of generating position maps for the 300W_LP dataset can be seen in generate_posmap_300WLP.

  2. Train an encoder-decoder network to learn the mapping from RGB image to position map; a sketch of the loss follows below.

The weight mask can be found in the Data/uv-data folder.
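The objective itself is a plain weighted regression. A sketch, assuming TensorFlow 1.x as listed in the prerequisites (the shapes are illustrative; the position map here is 256 x 256):

import tensorflow as tf  # TensorFlow 1.x, per the prerequisites above

def weighted_posmap_loss(pred, gt, weight_mask):
    # pred, gt: (batch, 256, 256, 3) position maps in UV space.
    # weight_mask: (256, 256, 1), broadcast over batch and xyz channels; it
    # upweights semantically important regions such as the 68 key points.
    return tf.reduce_mean(tf.square(pred - gt) * weight_mask)

Training then reduces to ordinary supervised regression: feed RGB crops, regress position maps, and minimize this loss with a standard optimizer.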


What you can customize:

  1. The UV space of the position map.

You can change the parameterization method, or change the resolution of the UV space.

  2. The backbone of the encoder-decoder network.

This demo uses residual blocks; VGG or MobileNet are also fine.

  3. The weight mask.

You can change the weights to focus more on the parts your project needs most (see the sketch after this list).

  4. The training data.

If you have scanned 3D faces, it's better to train PRN with your own data. Before that, you may need to use ICP to align your face meshes.
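As an illustration of item 3, a custom mask can start from the shipped one and upweight whatever UV region your application cares about. Both the file path and the region box below are assumptions, not the repo's exact values:

import numpy as np
from skimage.io import imread

# Load the shipped weight mask (assumed path; see the training section above)
# and double the weight of a UV region, e.g. around the mouth.
mask = imread('Data/uv-data/uv_weight_mask.png').astype(np.float32) / 255.
mask[160:210, 70:186] *= 2.0  # hypothetical UV box; tune it for your project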


FAQ

  1. How to speed up?

a. The network inference part.

You can train a smaller network or use a smaller position map.

b. The rendering part.

You can refer to the C++ version.

c. Other parts, such as face detection and writing the .obj file.

The best way is to rewrite them in C++.

  2. How to improve the precision?

a. Geometry precision.

Due to the restriction of the training data, faces reconstructed by this demo show little fine detail. You can train the network with your own detailed data, or apply post-processing such as shape-from-shading to add detail.

b. Texture precision.

I just added an option to specify the texture size. When the texture size is larger than the face size in the original image and a new facial image is rendered with texture mapping, there will be little resampling error.


Changelog

  • 2018/7/19 added the training part; the resolution of the texture map can now be specified
  • 2018/5/10 added texture-editing examples (for data augmentation, face swapping)
  • 2018/4/28 added visibility of vertices, .obj output with texture map, and depth image
  • 2018/4/26 mesh can now be output in front view
  • 2018/3/28 added pose estimation
  • 2018/3/12 first release (3D reconstruction and dense alignment)


License

Code: under the MIT license.

Trained model file: please see issue 28; thanks to Kyle McDonald for his answer.


Citation

If you use this code, please consider citing:

@inProceedings{feng2018prn,
  title     = {Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network},
  author    = {Yao Feng and Fan Wu and Xiaohu Shao and Yanfeng Wang and Xi Zhou},
  booktitle = {ECCV},
  year      = {2018}
}

Contacts

Please contact [email protected] or open an issue for any questions or suggestions.

Thanks! (●'◡'●)

