FLAME: Articulated Expressive 3D Head Model (PyTorch)

This is an implementation of the FLAME 3D head model in PyTorch.

We also provide Tensorflow FLAME, a Chumpy-based FLAME-fitting repository, and code to convert from Basel Face Model to FLAME.

FLAME is a lightweight and expressive generic head model learned from over 33,000 accurately aligned 3D scans. FLAME combines a linear identity shape space (trained from head scans of 3800 subjects) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. For details, please see the scientific publication

Learning a model of facial shape and expression from 4D scans
Tianye Li*, Timo Bolkart*, Michael J. Black, Hao Li, and Javier Romero
ACM Transactions on Graphics (Proc. SIGGRAPH Asia) 2017

and the supplementary video.
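To make the model formulation above concrete, here is a minimal sketch of FLAME's linear blendshape stage (template plus identity and expression offsets) with synthetic data. The tensor sizes and variable names are illustrative stand-ins, not the repository's API; the real bases come from the downloaded model file, and the full model additionally applies pose correctives and linear blend skinning.

```python
import torch

# Illustrative sketch of FLAME's linear blendshape stage with synthetic data.
# The real model has 5023 vertices; these toy dimensions are assumptions.
N = 100                   # number of mesh vertices (toy value)
n_shape, n_expr = 10, 10  # numbers of identity / expression coefficients

torch.manual_seed(0)
template = torch.randn(N, 3)             # mean head mesh
shape_dirs = torch.randn(N, 3, n_shape)  # identity shape basis
expr_dirs = torch.randn(N, 3, n_expr)    # expression basis

def blend_shapes(betas, psi):
    """Vertices before pose correctives and skinning:
       T = template + shape_dirs @ betas + expr_dirs @ psi."""
    return (template
            + torch.einsum('vcs,s->vc', shape_dirs, betas)
            + torch.einsum('vce,e->vc', expr_dirs, psi))

betas = torch.zeros(n_shape)  # identity coefficients
psi = torch.zeros(n_expr)     # expression coefficients
verts = blend_shapes(betas, psi)
print(verts.shape)  # torch.Size([100, 3])
```

With all coefficients at zero, the output is exactly the template mesh; nonzero coefficients add linear offsets along the learned basis directions.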


The code uses Python 3.7 and is tested with PyTorch 1.4.

Setup FLAME PyTorch Virtual Environment

python3.7 -m venv /.virtualenvs/FLAME_PyTorch
source /.virtualenvs/FLAME_PyTorch/bin/activate

Clone the project and install requirements

git clone
cd FLAME_PyTorch
pip install -r requirements.txt
mkdir model

Download models

  • Download FLAME model from here. You need to sign up and agree to the model license for access to the model. Copy the downloaded model inside the model folder.
  • Download the landmark embeddings from the RingNet project. Copy the file inside the model folder.


Loading FLAME and visualising the 3D landmarks on the face

Please note that we use the pose-dependent dynamic contour for the face, as introduced by the RingNet project.

Run the following command from the terminal



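The landmark embeddings downloaded above attach each landmark to a triangle of the mesh via barycentric coordinates, so 3D landmarks can be read off the posed mesh directly. The sketch below illustrates that lookup with synthetic stand-in data; the variable names, toy mesh, and embedding values are assumptions for illustration, not the contents of the real embeddings file.

```python
import torch

# Hedged sketch: recovering 3D landmarks from mesh vertices via barycentric
# landmark embeddings. All values below are synthetic stand-ins.
torch.manual_seed(0)
verts = torch.randn(100, 3)             # toy mesh vertices
faces = torch.randint(0, 100, (50, 3))  # toy triangle vertex indices
lmk_face_idx = torch.tensor([0, 1, 2])  # triangle containing each landmark
lmk_bary = torch.tensor([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0],
                         [1/3, 1/3, 1/3]])  # barycentric weights per landmark

def landmarks_from_mesh(verts, faces, face_idx, bary):
    tri = verts[faces[face_idx]]                  # (L, 3 corners, 3 xyz)
    return torch.einsum('lc,lcx->lx', bary, tri)  # weighted corner average

lmk = landmarks_from_mesh(verts, faces, lmk_face_idx, lmk_bary)
print(lmk.shape)  # torch.Size([3, 3])
```

A landmark with weights (1, 0, 0) coincides with the first corner of its triangle; the dynamic contour simply switches which triangles are indexed depending on head pose.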
FLAME is available under a Creative Commons Attribution license. By using the model or the code, you acknowledge that you have read the license terms, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not use the code.

Referencing FLAME

When using this code in a scientific publication, please cite

@article{FLAME:SiggraphAsia2017,
  title = {Learning a model of facial shape and expression from {4D} scans},
  author = {Li, Tianye and Bolkart, Timo and Black, Michael. J. and Li, Hao and Romero, Javier},
  journal = {ACM Transactions on Graphics, (Proc. SIGGRAPH Asia)},
  volume = {36},
  number = {6},
  year = {2017},
  url = {}
}

Additionally if you use the pose dependent dynamic landmarks from this codebase, please cite

@inproceedings{RingNet:CVPR:2019,
  title = {Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision},
  author = {Sanyal, Soubhik and Bolkart, Timo and Feng, Haiwen and Black, Michael},
  booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2019},
  month_numeric = {6}
}

Supported Projects

FLAME supports several projects such as

FLAME is part of SMPL-X: A new joint 3D model of the human body, face and hands together.


If you have any questions regarding the PyTorch implementation, you can contact us at [email protected] and [email protected].


This repository was built with modifications from SMPLX.
