A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.
Adding adversarial loss and perceptual loss (VGGFace) to deepfakes' (reddit user) auto-encoder architecture.
| Date | Update |
| ------------- | ------------- |
| 2018-08-27 | Colab support: A colab notebook for faceswap-GAN v2.2 is provided. |
| 2018-07-25 | Data preparation: Add a new notebook for video pre-processing, in which MTCNN is used for face detection as well as face alignment. |
| 2018-06-29 | Model architecture: faceswap-GAN v2.2 now supports different output resolutions: 64x64, 128x128, and 256x256. The default `RESOLUTION = 64` can be changed in the config cell of the v2.2 notebook. |
| 2018-06-25 | New version: faceswap-GAN v2.2 has been released. The main improvements of the v2.2 model are its capability of generating realistic and consistent eye movements (results are shown below, or Ctrl+F for "eyes"), as well as higher video quality with face alignment. |
| 2018-06-06 | Model architecture: Add a self-attention mechanism proposed in SAGAN into the V2 GAN model. (Note: there is still no official code release for SAGAN; the implementation in this repo could be wrong. We'll keep an eye on it.) |
Here is a playground notebook for faceswap-GAN v2.2 on Google Colab. Users can train their own model in the browser.
[Update 2019/10/04] There seem to be import errors in the latest Colab environment due to inconsistent package versions. Please make sure that the Keras and TensorFlow versions match the version numbers shown in the requirements section below.
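As a quick guard against this kind of version drift, a cell like the following can be run at the top of the Colab notebook. The pinned version strings here are placeholders, not authoritative; substitute the exact numbers from the requirements section.

```python
# Sanity check for the Colab runtime before running the notebooks.
# The version strings below are placeholders -- use the exact versions
# listed in the requirements section of this README.
import keras
import tensorflow as tf

EXPECTED_KERAS = "2.1.5"  # placeholder, see requirements section
EXPECTED_TF = "1.6.0"     # placeholder, see requirements section

assert keras.__version__ == EXPECTED_KERAS, (
    f"Keras {keras.__version__} found, expected {EXPECTED_KERAS}")
assert tf.__version__ == EXPECTED_TF, (
    f"TensorFlow {tf.__version__} found, expected {EXPECTED_TF}")
```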
### Descriptions
- `prep_binary_masks.ipynb`: Notebook for training data preprocessing. It requires the `face_alignment` package. (An alternative method for generating binary masks, which does not require the `face_alignment` and `dlib` packages, can be found in `MTCNN_video_face_detection_alignment.ipynb`.)
- `MTCNN_video_face_detection_alignment.ipynb`: This notebook performs face detection/alignment on the input video. Detected faces are saved in `./faces/raw_faces` and `./faces/aligned_faces` for non-aligned/aligned results respectively. Crude eyes binary masks are also generated and saved in `./faces/binary_masks_eyes`; these binary masks can serve as a suboptimal alternative to masks generated through `prep_binary_masks.ipynb`.
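For a feel of what the face-extraction step does, here is a minimal sketch using the pip-installable `mtcnn` package together with OpenCV. It stands in for the repo's bundled MTCNN scripts and only illustrates the detect-crop-save loop; the input path and output naming are made up for illustration.

```python
# Minimal face-extraction sketch using the `mtcnn` pip package and OpenCV.
# The notebook uses the repo's bundled MTCNN scripts instead; paths and
# file names here are illustrative.
import os
import cv2
from mtcnn import MTCNN

os.makedirs("./faces/raw_faces", exist_ok=True)
detector = MTCNN()
cap = cv2.VideoCapture("input_video.mp4")  # hypothetical input video
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MTCNN expects RGB; OpenCV decodes frames as BGR.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    for i, det in enumerate(detector.detect_faces(rgb)):
        x, y, w, h = det["box"]
        x, y = max(x, 0), max(y, 0)
        face = frame[y:y + h, x:x + w]
        cv2.imwrite(f"./faces/raw_faces/frame{frame_idx}_face{i}.jpg", face)
    frame_idx += 1
cap.release()
```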
### Usage
1. Run `MTCNN_video_face_detection_alignment.ipynb` to extract faces from videos. Manually move/rename the aligned face images into the `./faceA/` or `./faceB/` folders.
2. Run `prep_binary_masks.ipynb` to generate binary masks of training images.
   - You can skip this pre-processing step by (1) setting `use_bm_eyes=False` in the config cell of the train_test notebook (sketched below), or (2) using the low-quality binary masks generated in step 1.
3. Run `FaceSwap_GAN_v2.2_train_test.ipynb` to train models.
4. Run `FaceSwap_GAN_v2.2_video_conversion.ipynb` to create videos using the trained models in step 3.
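For orientation, the relevant part of the train_test notebook's config cell looks roughly like the sketch below. `RESOLUTION` and `use_bm_eyes` are documented in this README; treat the other names and defaults as assumptions about the notebook's exact contents.

```python
# Rough sketch of the config cell in FaceSwap_GAN_v2.2_train_test.ipynb.
# RESOLUTION and use_bm_eyes are documented above; everything else here
# is an assumption for illustration.
RESOLUTION = 64       # output resolution: 64, 128, or 256
use_bm_eyes = True    # set to False to skip the prep_binary_masks.ipynb step
img_dirA = "./faceA"  # training images for identity A (assumed name)
img_dirB = "./faceB"  # training images for identity B (assumed name)
```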
Training data format: face images are supposed to be in the `./faceA/` or `./faceB/` folder for each target respectively. Images will be resized to 256x256 during training.
### Features
- Improved output quality: Adversarial loss improves the reconstruction quality of generated images.
- Additional results: This image shows 160 random results generated by v2 GAN with self-attention mechanism (image format: source -> mask -> transformed).
- Evaluations: Evaluations of the output quality on the Trump/Cage dataset can be found here.
- VGGFace perceptual loss: Perceptual loss makes the direction of the eyeballs more realistic and consistent with the input face. It also smooths out artifacts in the segmentation mask, resulting in higher output quality.
- Attention mask: The model predicts an attention mask that helps with handling occlusion, eliminating artifacts, and producing a natural skin tone (a sketch of the mask blending and loss terms follows this list).
- Configurable input/output resolution (v2.2): The model supports 64x64, 128x128, and 256x256 output resolutions.
- Face tracking/alignment using MTCNN and Kalman filter in video conversion: MTCNN gives more stable face detections, and a Kalman filter smooths the bounding-box positions over frames, reducing jitter on the swapped face (see the Kalman filter sketch after this list).
- Eyes-aware training: Introduce high reconstruction loss and edge loss in the eyes area, which guides the model to generate realistic eyes.
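The blending and loss terms above can be summarized in a short sketch. The alpha-mask composition follows the attention-mask description; the loss weights, the least-squares adversarial form, and the `vggface_features` extractor are illustrative assumptions, not the repo's exact values.

```python
# Hedged sketch of the attention-mask blending and the combined generator
# loss described in the feature list. Weights and the adversarial-loss
# variant are illustrative assumptions.
import tensorflow as tf

def compose_output(alpha, raw_rgb, warped_input):
    # The generator predicts an attention (alpha) mask in [0, 1] and blends
    # its raw output with the warped input face.
    return alpha * raw_rgb + (1.0 - alpha) * warped_input

def generator_loss(real, fake, eye_mask, d_fake_logits, vggface_features,
                   w_adv=0.1, w_recon=1.0, w_pl=0.01, eye_weight=30.0):
    # Adversarial term (least-squares GAN form, one common choice).
    adv = tf.reduce_mean(tf.square(d_fake_logits - 1.0))
    # Eyes-aware L1 reconstruction: extra weight inside the binary eye mask.
    weights = 1.0 + (eye_weight - 1.0) * eye_mask
    recon = tf.reduce_mean(weights * tf.abs(real - fake))
    # VGGFace perceptual term: distance between deep face features.
    pl = tf.reduce_mean(tf.square(vggface_features(real) - vggface_features(fake)))
    return w_adv * adv + w_recon * recon + w_pl * pl
```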
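Similarly, the Kalman-filter smoothing of detections can be sketched with OpenCV's `cv2.KalmanFilter`. The constant-velocity model and noise covariances below are assumptions, chosen only to show the predict/correct loop, not the repo's actual tuning.

```python
# Hedged sketch of bounding-box smoothing with a constant-velocity Kalman
# filter (cv2.KalmanFilter). Noise covariances are illustrative.
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)  # state [x, y, vx, vy], measurement [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-4 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

def smooth_center(x, y):
    """Feed a raw detected box center, return the smoothed center."""
    kf.predict()
    est = kf.correct(np.array([[x], [y]], np.float32))
    return float(est[0, 0]), float(est[1, 0])
```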
Code borrows from tjwei, eriklindernoren, fchollet, keras-contrib and reddit user deepfakes' project. The generative network is adapted from CycleGAN. Weights and scripts of MTCNN are from FaceNet. Illustrations are from irasutoya.