Training & Inference Code of PRNet in PyTorch 1.1.0
This is an unofficial PyTorch implementation of PRNet, since there is no complete code for generating training data from the 300WLP dataset and training on it.
@date: 2019.11.13
@notice: An important bug in loading the UV map has been fixed by mj. The original uv_map.jpg is flipped, so *.npy files are used here to fix this problem. Thanks to mjanddy!
@date: 2019.11.14
@notice: Inference stage uploaded; a pretrained model is available at results/latest.pth. Thanks to mjanddy!
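For reference, here is a minimal sketch (not code from this repo) of loading these two artifacts; the .npy file name follows the dataset layout shown further below, and the checkpoint structure is an assumption:

```python
import numpy as np
import torch

# UV position map stored as .npy (used instead of the flipped uv_map.jpg)
uv_posmap = np.load("300WLP_IBUG/0/IBUG_image_xxx.npy")   # path and shape assumed

# Pretrained weights from the inference stage; the checkpoint layout may differ
state = torch.load("results/latest.pth", map_location="cpu")
print(uv_posmap.shape, type(state))
```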
Since the default PIL.Image reader has been replaced by cv2.imread in the image reader, you need to make a small change to the tensorboard package at your_python_path/site-packages/torch/utils/tensorboard/summary.py: add tensor = tensor[:, :, ::-1] before image = Image.fromarray(tensor) in the function make_image(...).

```python
...
def make_image(tensor, rescale=1, rois=None):
    """Convert a numpy representation of an image to Image protobuf"""
    from PIL import Image
    height, width, channel = tensor.shape
    scaled_height = int(height * rescale)
    scaled_width = int(width * rescale)
    tensor = tensor[:, :, ::-1]  # cv2 images are BGR; flip to RGB before handing to PIL
    image = Image.fromarray(tensor)
    ...
```
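For context, here is a minimal sketch (paths and log directory are placeholders) of where that patch is exercised: cv2.imread returns BGR images, so without the one-line flip tensorboard would render images with swapped red and blue channels:

```python
import cv2
from torch.utils.tensorboard import SummaryWriter

img_bgr = cv2.imread("TestImages/example.jpg")            # hypothetical image; cv2 gives BGR, HxWx3, uint8
writer = SummaryWriter("runs/prnet")                      # assumed log directory
writer.add_image("input", img_bgr, 0, dataformats="HWC")  # ends up in make_image(), where the patch flips BGR -> RGB
writer.close()
```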
Before we start generating UV position maps and training, the first step is to generate BFM.mat from the Basel Face Model. For simplicity, the corresponding BFM.mat has been provided here.
After downloading it, move BFM.mat to utils/.
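A quick sanity check (just a sketch; the key names inside the .mat file depend on how it was exported) that the file is readable from utils/:

```python
from scipy.io import loadmat

bfm = loadmat("utils/BFM.mat")
# list the top-level variables, ignoring MATLAB metadata entries
print(sorted(k for k in bfm if not k.startswith("__")))
```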
Besides, the essential Python packages are listed in requirements.txt.
YadiraF/face3d provides scripts for generating the uv_posmap; here I wrap them for batch processing.
You can use utils/generate_posmap_300WLP.py as:

```shell
python3 generate_posmap_300WLP.py --input_dir ./dataset/300WLP/IBUG/ --save_dir ./300WLP_IBUG/
```
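If you want to process several 300W-LP subsets in one go, here is a hedged Python sketch that simply calls the script once per subset from the repository root; the subset names and relative paths are assumptions, so adjust them to your download:

```python
import subprocess

# 300W-LP ships several subsets; run the generator once per subset directory
for subset in ["IBUG", "AFW", "HELEN", "LFPW"]:
    subprocess.run([
        "python3", "utils/generate_posmap_300WLP.py",
        "--input_dir", f"./dataset/300WLP/{subset}/",
        "--save_dir", f"./300WLP_{subset}/",
    ], check=True)
```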
The resulting 300WLP_IBUG dataset then has the proper structure for training PRNet:
- 300WLP_IBUG
  - 0/
    - IBUG_image_xxx.npy
    - original.jpg (original RGB)
    - uv_posmap.jpg (corresponding UV Position Map)
  - 1/
  - ...
  - 100/
For 300WLP, I provide processed original/uv_posmap pairs of the IBUG subset here.
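For illustration, here is a minimal PyTorch Dataset sketch that reads the original/uv_posmap pairs from the layout above; the class name is hypothetical, and the repo's actual data pipeline (for example, whether it reads the .npy or the .jpg UV map) may differ:

```python
import os
import cv2
import numpy as np
from torch.utils.data import Dataset

class UVPosmapFolder(Dataset):  # hypothetical name, not the repo's class
    def __init__(self, root):   # root e.g. "./300WLP_IBUG"
        self.dirs = sorted(
            os.path.join(root, d) for d in os.listdir(root)
            if os.path.isdir(os.path.join(root, d))
        )

    def __len__(self):
        return len(self.dirs)

    def __getitem__(self, idx):
        d = self.dirs[idx]
        img = cv2.imread(os.path.join(d, "original.jpg"))   # input image (BGR order from cv2)
        uv = cv2.imread(os.path.join(d, "uv_posmap.jpg"))   # target UV position map
        img = img.astype(np.float32) / 255.0
        uv = uv.astype(np.float32) / 255.0
        # channels-first arrays; DataLoader's default collate turns them into tensors
        return img.transpose(2, 0, 1), uv.transpose(2, 0, 1)
```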
After finishing the above two steps, you can train your own PRNet as:

```shell
python3 train.py --train_dir ./300WLP_IBUG
```
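To make the objective concrete, here is a hedged sketch of what a training step boils down to: regressing the UV position map from the input image with a pixel-wise loss. It reuses the toy Dataset sketch above, a plain MSE loss, and a tiny stand-in network; the real train.py uses the actual PRNet architecture and may weight the loss differently:

```python
import torch
from torch.utils.data import DataLoader

dataset = UVPosmapFolder("./300WLP_IBUG")   # toy Dataset sketch from above
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = torch.nn.Sequential(                # stand-in for the real PRNet encoder-decoder
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.MSELoss()

for epoch in range(10):
    for img, uv_gt in loader:
        pred = model(img)                   # predicted UV position map, same HxW as the input
        loss = criterion(pred, uv_gt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```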
You can use tensorboard to visualize the intermediate output at localhost:6006:

```shell
tensorboard --logdir=absolute_path_of_prnet_runs/
```
The following image is used to judge the effectiveness of PRNet on unknown data.
(Original, UV map ground truth, UV map predicted)
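Alongside the visual comparison, a quick hedged sketch of a numeric check between a ground-truth and a predicted UV position map (both file paths are placeholders):

```python
import cv2
import numpy as np

uv_gt = cv2.imread("path/to/uv_posmap_gt.jpg").astype(np.float32)      # placeholder ground-truth map
uv_pred = cv2.imread("path/to/uv_posmap_pred.jpg").astype(np.float32)  # placeholder predicted map
print("mean absolute error per pixel:", np.abs(uv_gt - uv_pred).mean())
```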
You can use the following command to run PRNet inference. Details about the parameters can be found in inference.py.

```shell
python3 inference.py -i input_dir(default is TestImages) -o output_dir(default is TestImages/results) --model model_path(default is results/latest.pth) --gpu 0 (-1 denotes cpu)
```
If you use this code, please consider citing:
```
@inProceedings{feng2018prn,
  title     = {Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network},
  author    = {Yao Feng and Fan Wu and Xiaohu Shao and Yanfeng Wang and Xi Zhou},
  booktitle = {ECCV},
  year      = {2018}
}
```