
About the developer

rosinality



swapping-autoencoder-pytorch

Unofficial implementation of Swapping Autoencoder for Deep Image Manipulation (https://arxiv.org/abs/2007.00653) in PyTorch

Usage

First, create the LMDB datasets:

python prepare_data.py --out LMDB_PATH --n_worker N_WORKER --size SIZE1,SIZE2,SIZE3,... DATASET_PATH

This converts the images to JPEG and pre-resizes them. This implementation does not use progressive growing, but you can create datasets at multiple resolutions by passing a comma-separated list to the size argument, in case you want to try other resolutions later.
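A comma-separated size argument like this is usually parsed into a list of integers with a custom argparse type. The sketch below is hypothetical (the flag names mirror the command above, but the actual script's argument handling may differ):

```python
import argparse

# Hypothetical sketch: accepting a comma-separated list of target
# resolutions, e.g. --size 128,256,512, as a list of ints.
parser = argparse.ArgumentParser(description="Pre-resize images into LMDB datasets")
parser.add_argument("--out", type=str, help="output LMDB path")
parser.add_argument("--n_worker", type=int, default=8, help="number of worker processes")
parser.add_argument(
    "--size",
    type=lambda s: [int(v) for v in s.split(",")],
    default=[256],
    help="comma-separated resolutions to pre-resize to",
)
parser.add_argument("path", type=str, help="dataset path")

# Example invocation: two resolutions are prepared from one dataset.
args = parser.parse_args(["--out", "out_lmdb", "--size", "128,256", "images/"])
print(args.size)  # → [128, 256]
```

Each entry in the resulting list would correspond to one pre-resized copy of the dataset stored in the LMDB.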

Then you can train the model in a distributed setting:

python -m torch.distributed.launch --nproc_per_node=N_GPU --master_port=PORT train.py --batch BATCH_SIZE LMDB_PATH
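torch.distributed.launch starts one process per GPU and passes each worker a --local_rank argument (newer versions also set the LOCAL_RANK environment variable), which the script uses to pick its device. A minimal sketch of the receiving side (the real train.py may differ):

```python
import argparse
import os

# Sketch: how a script launched via torch.distributed.launch can
# discover which GPU it should use. The launcher passes --local_rank
# to every worker process; newer versions also export LOCAL_RANK.
parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int,
                    default=int(os.environ.get("LOCAL_RANK", 0)))
args, _ = parser.parse_known_args(["--local_rank", "1"])  # simulated launcher argv

device = f"cuda:{args.local_rank}"  # each process pins itself to one GPU
print(device)  # → cuda:1
```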

train.py supports Weights & Biases logging. If you want to use it, add the --wandb argument to the script.
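Optional logging behind a flag like this is commonly implemented by importing wandb only when the flag is set, so the dependency stays optional. A hedged sketch of that pattern (not the repository's actual code; the project name is made up):

```python
import argparse

def make_logger(use_wandb: bool):
    """Return a log(metrics, step) callable; a no-op unless --wandb is set."""
    if use_wandb:
        import wandb  # imported lazily so wandb stays an optional dependency
        wandb.init(project="swapping-autoencoder")  # hypothetical project name
        return lambda metrics, step: wandb.log(metrics, step=step)
    return lambda metrics, step: None

parser = argparse.ArgumentParser()
parser.add_argument("--wandb", action="store_true",
                    help="enable Weights & Biases logging")
args = parser.parse_args([])  # no flag given: logging disabled

log = make_logger(args.wandb)
log({"d_loss": 0.5}, step=0)  # silently ignored when --wandb is absent
```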

Generate samples

You can test a trained model using generate.py:

python generate.py --ckpt [CHECKPOINT PATH] IMG1 IMG2 IMG3 ...
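The core operation behind generation in the Swapping Autoencoder paper is exchanging codes between images: the encoder splits each image into a structure code and a texture code, and the decoder is fed the structure of one image with the texture of another. A toy sketch of that swap, with placeholder strings standing in for the real encoder outputs:

```python
def swap_codes(codes_a, codes_b):
    """Given (structure, texture) pairs for two images, swap the textures."""
    (s_a, t_a), (s_b, t_b) = codes_a, codes_b
    # Keep each image's structure, take the other image's texture.
    return (s_a, t_b), (s_b, t_a)

# Placeholder codes standing in for encoder outputs of IMG1 and IMG2.
hybrid_1, hybrid_2 = swap_codes(("struct1", "tex1"), ("struct2", "tex2"))
print(hybrid_1)  # → ('struct1', 'tex2')
```

Decoding each hybrid pair yields an image with one input's layout and the other's appearance.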

Samples

Generated sample image
