A generative model conditioned on shape and appearance.
This repository contains training code for the CVPR 2018 spotlight paper.
The model learns to infer appearance from a single image and can synthesize images with that appearance in different poses.
This is a slightly modified version of the code that was used to produce the results in the paper. The original code was cleaned up, the data-dependent weight initialization was made compatible with `tensorflow >= 1.3.0`, and a unified model is used across the datasets. You can find the original code and checkpoints online (`vunet/runs`), but if you want to use them, please keep in mind that the original weight initialization is not compatible with `tensorflow >= 1.3.0`; you should use an older TensorFlow version with the original code.
The code was developed with Python 3. Dependencies can be installed with

```
pip install -r requirements.txt
```

These requirements correspond to the dependency versions used to generate the pretrained models, but other versions might work as well.
Download and unpack the desired dataset. This results in a folder containing an `index.p` file. Either add a symbolic link named `data` pointing to the download directory or adjust the path to the `index.p` file in the configuration file. For convenience, you can also run the provided download script, which will perform the above steps automatically; `<dataset>` can be, e.g., `market`. To train the model, run
```
python main.py --config <dataset>.yaml
```
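As a concrete sketch of the data-directory setup described above (the download path here is made up for illustration):

```shell
# stand-in for the unpacked dataset download (illustrative path)
mkdir -p /tmp/vunet_dataset
touch /tmp/vunet_dataset/index.p

# link it as `data` so the training code finds data/index.p
ln -sfn /tmp/vunet_dataset data
ls data/index.p
```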
By default, images and checkpoints are saved to `log/`. To change the log directory and other options, see

```
python main.py -h
```

and the corresponding configuration file. To obtain images of optimal quality, it is recommended to train for a second round with a loss based on Gram matrices. To do so, run
```
python main.py --config <dataset>_retrain.yaml --retrain --checkpoint <path to checkpoint>
```
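For reference, the Gram-matrix statistic behind such style losses can be sketched as follows (function names and shapes are illustrative, not the repository's API):

```python
import numpy as np

def gram_matrix(features):
    """features: (H, W, C) feature map -> (C, C) Gram matrix."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    # inner products between channel activations, averaged over positions
    return f.T @ f / (h * w)

def gram_loss(feat_a, feat_b):
    # mean squared difference between the two Gram matrices
    diff = gram_matrix(feat_a) - gram_matrix(feat_b)
    return float(np.mean(diff ** 2))
```

A loss of this form compares feature statistics rather than features at exact positions, which is why it helps texture quality.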
You can find pretrained models online.
To be able to train the model on your own dataset, you must provide a pickled dictionary with the following keys:

- `joint_order`: list indicating the order of joints.
- `imgs`: list of paths to images (relative to the pickle file).
- `train`: list of booleans indicating whether an image belongs to the training split.
- `joints`: list of `[0,1]`-normalized xy joint coordinates of shape `(len(joint_order), 2)`. Use negative values for occluded joints.
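To illustrate the `joints` format, a hypothetical helper converting pixel-space keypoints into `[0,1]`-normalized coordinates (keeping negative sentinels for occluded joints) might look like:

```python
import numpy as np

def normalize_joints(joints_px, width, height):
    # joints_px: (num_joints, 2) xy pixel coordinates;
    # negative values mark occluded joints
    joints = np.asarray(joints_px, dtype=np.float64)
    occluded = (joints < 0).any(axis=1)
    normalized = joints / np.array([width, height], dtype=np.float64)
    normalized[occluded] = -1.0
    return normalized

# a 100x200 image with one visible and one occluded joint
normalize_joints([[50, 100], [-1, -1]], 100, 200)
# -> [[0.5, 0.5], [-1.0, -1.0]]
```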
The expected joint order is `'rankle', 'rknee', 'rhip', 'rshoulder', 'relbow', 'rwrist', 'reye', 'lankle', 'lknee', 'lhip', 'lshoulder', 'lelbow', 'lwrist', 'leye', 'cnose'`, and images without valid values for `rhip`, `rshoulder`, `lhip`, `lshoulder` are ignored.
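Putting the format together, a minimal script that writes such a pickled index might look like this (all file names and coordinates are made up for illustration):

```python
import pickle

joint_order = ['rankle', 'rknee', 'rhip', 'rshoulder', 'relbow', 'rwrist',
               'reye', 'lankle', 'lknee', 'lhip', 'lshoulder', 'lelbow',
               'lwrist', 'leye', 'cnose']

index = {
    "joint_order": joint_order,
    # image paths relative to the pickle file (made-up examples)
    "imgs": ["imgs/person_000.jpg", "imgs/person_001.jpg"],
    # True -> training split, False -> test split
    "train": [True, False],
    # one (len(joint_order), 2) list of [0,1]-normalized xy
    # coordinates per image; negative values mark occluded joints
    "joints": [
        [[0.5, 0.5]] * len(joint_order),
        [[-1.0, -1.0]] + [[0.5, 0.5]] * (len(joint_order) - 1),
    ],
}

with open("index.p", "wb") as f:
    pickle.dump(index, f)
```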