Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation - Official PyTorch Implementation

Binder PyPI version License: MIT

[Arxiv] [Video]

Evaluation code for Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation. Finally ported to PyTorch!

Recent Updates

2020.10.27: Added STL support

2020.05.07: Added a wheel package!

2020.05.06: Added a myBinder version for quick testing of the model

2020.04.30: Initial PyTorch release

What's in this release?

The original pix2vertex repo was composed of three parts:
- A network to map the input image to depth + correspondence maps, trained on synthetic facial data
- A non-rigid ICP scheme for converting the output maps to a full 3D mesh
- A shape-from-shading scheme for adding fine mesoscopic details

This repo currently contains our image-to-image network, with weights and model ported to PyTorch, and a simple Python post-processing scheme.
- The released network was trained on a combination of synthetic images and unlabeled real images for some extra robustness :)

Installation

Installation from PyPI

```bash
$ pip install pix2vertex
```

Installation from source

```bash
$ git clone https://github.com/eladrich/pix2vertex.pytorch.git
$ cd pix2vertex.pytorch
$ python setup.py install
```
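
After installing from either source, a quick import check confirms the package is visible to your interpreter. This is a minimal sketch; it only assumes the package name `pix2vertex` and the `reconstruct` entry point shown in the Usage section below.

```python
# Sanity check: the package imports and exposes its public API.
import pix2vertex as p2v

print(p2v)  # module location
print(sorted(name for name in dir(p2v) if not name.startswith('_')))  # e.g. 'reconstruct'
```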

Usage

The quickest way to try `p2v` is using the `reconstruct` method over an input image, followed by visualization or STL creation.

```python
import pix2vertex as p2v
from imageio import imread

image = imread('face.jpg')  # placeholder path - use your own input image
result, crop = p2v.reconstruct(image)

# Interactive visualization in a notebook
p2v.vis_depth_interactive(result['Z_surface'])

# Static visualization using matplotlib
p2v.vis_depth_matplotlib(crop, result['Z_surface'])

# Export to STL
p2v.save2stl(result['Z_surface'], 'res.stl')
```

For a more complete example see the `reconstruct_pipeline` notebook. You can give it a try without any installations using our Binder port.
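
Since `result['Z_surface']` feeds both the visualizers and the STL exporter, it behaves like an ordinary array, so intermediate results can be kept around with standard tools. The following is a minimal sketch, assuming `crop` is a regular image array and `Z_surface` is a NumPy depth map; the file names are placeholders.

```python
import numpy as np
from imageio import imwrite

# Continue from the snippet above: `result` and `crop` come from p2v.reconstruct(image).
depth = np.asarray(result['Z_surface'], dtype=np.float32)

imwrite('crop.png', crop)    # cropped input used for the reconstruction
np.save('depth.npy', depth)  # raw depth values for later post-processing

print(depth.shape, np.nanmin(depth), np.nanmax(depth))  # nan-aware, in case of masked background
```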

Pretrained Model

Models can be downloaded from these links:
- pix2vertex model
- dlib landmark predictor - note that the dlib model has its own license

If no model path is specified the package automagically downloads the required models.
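
If you prefer to fetch the dlib landmark predictor yourself (for example, to run offline), a manual download along these lines should work. Note the assumptions here: the URL below is the standard dlib 68-point landmark model, and the destination path is a placeholder, since where pix2vertex expects the file is not documented in this README.

```python
# Minimal sketch: manually fetch and unpack the standard dlib 68-point landmark
# predictor. Adjust DEST to wherever your setup expects the model to live.
import bz2
import urllib.request

URL = 'http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2'
DEST = 'shape_predictor_68_face_landmarks.dat'  # placeholder destination

compressed_path, _ = urllib.request.urlretrieve(URL)
with bz2.open(compressed_path, 'rb') as src, open(DEST, 'wb') as dst:
    dst.write(src.read())
```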

TODOs

  • [x] Port Torch model to PyTorch
  • [x] Release an inference notebook (using K3D)
  • [x] Add requirements
  • [x] Pack as wheel
  • [x] Ported to MyBinder
  • [x] Add a simple method to export a stl file for printing
  • [ ] Port the Shape-from-Shading method used in our matlab paper
  • [ ] Write a short blog about the revised training scheme

Citation

If you use this code for your research, please cite our paper Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation:

```
@article{sela2017unrestricted,
  title={Unrestricted Facial Geometry Reconstruction Using Image-to-Image Translation},
  author={Sela, Matan and Richardson, Elad and Kimmel, Ron},
  journal={arxiv},
  year={2017}
}
```
