vq-vae-2-pytorch

Implementation of Generating Diverse High-Fidelity Images with VQ-VAE-2 in PyTorch

Update

  • 2020-06-01

train_vqvae.py and vqvae.py now support distributed training. You can pass --n_gpu [NUM_GPUS] to train_vqvae.py to train with [NUM_GPUS] GPUs.
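For example, to train on 4 GPUs (the dataset path is a placeholder):

python train_vqvae.py --n_gpu 4 [DATASET PATH]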

Requirements

  • Python >= 3.6
  • PyTorch >= 1.1
  • lmdb (for storing extracted codes)

Checkpoint of VQ-VAE pretrained on FFHQ

Usage

Currently supports 256px (top/bottom hierarchical prior)
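As background for stage 1, here is a minimal sketch of the vector-quantization bottleneck that the VQ-VAE trains: a nearest-neighbour codebook lookup followed by a straight-through gradient. All names are illustrative, not the repository's exact API:

```python
import torch
import torch.nn.functional as F

def quantize(z, codebook):
    # z: [B, C, H, W] encoder output; codebook: [n_embed, C] embeddings.
    B, C, H, W = z.shape
    flat = z.permute(0, 2, 3, 1).reshape(-1, C)   # [B*H*W, C]
    dist = torch.cdist(flat, codebook)            # distances to all codes
    idx = dist.argmin(dim=-1)                     # nearest code index
    z_q = codebook[idx].view(B, H, W, C).permute(0, 3, 1, 2)
    commit = F.mse_loss(z, z_q.detach())          # commitment loss term
    z_q = z + (z_q - z).detach()                  # straight-through estimator
    return z_q, idx.view(B, H, W), commit
```

The discrete indices returned here are what stage 2 later models with an autoregressive prior.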

  1. Stage 1 (VQ-VAE)

python train_vqvae.py [DATASET PATH]

If you use FFHQ, I highly recommend preprocessing the images (resize and convert to JPEG), as sketched below.
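A minimal preprocessing sketch using Pillow; the directory names are placeholders:

```python
from pathlib import Path
from PIL import Image

src, dst = Path("ffhq_raw"), Path("ffhq_256")    # placeholder paths
dst.mkdir(exist_ok=True)
for path in src.glob("*.png"):
    img = Image.open(path).convert("RGB")
    img = img.resize((256, 256), Image.LANCZOS)          # resize to 256px
    img.save(dst / (path.stem + ".jpg"), quality=95)     # re-encode as JPEG
```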

  2. Extract codes for stage 2 training

python extract_code.py --ckpt checkpoint/[VQ-VAE CHECKPOINT] --name [LMDB NAME] [DATASET PATH]
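The extracted codes are stored in an LMDB database. A minimal sketch for inspecting it follows; it assumes records are pickled and keyed by index, with a 'length' entry for the total count, so check extract_code.py for the authoritative layout:

```python
import pickle
import lmdb

env = lmdb.open("[LMDB NAME]", readonly=True, lock=False)
with env.begin() as txn:
    length = int(txn.get(b"length"))   # number of stored records
    row = pickle.loads(txn.get(b"0"))  # first extracted-code record
print(length, type(row))
```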

  3. Stage 2 (PixelSNAIL)

python train_pixelsnail.py [LMDB NAME]

It may be better to use a larger PixelSNAIL model; the current model size is reduced due to GPU memory constraints.
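Once the prior is trained, images are generated by sampling a code map autoregressively and decoding it with the VQ-VAE. A generic sketch of the sampling loop; the interface of `prior` here is an assumption, not the repository's exact API:

```python
import torch

@torch.no_grad()
def sample_codes(prior, height, width, device="cuda"):
    # `prior` is assumed to map a [B, H, W] long tensor of codes to
    # [B, n_codes, H, W] logits, in raster-scan autoregressive order.
    codes = torch.zeros(1, height, width, dtype=torch.long, device=device)
    for i in range(height):
        for j in range(width):
            logits = prior(codes)
            probs = logits[:, :, i, j].softmax(dim=-1)
            codes[:, i, j] = torch.multinomial(probs, 1).squeeze(-1)
    return codes  # feed through the VQ-VAE decoder to get an image
```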

Sample

Stage 1

Note: this is a training sample.

[Image: sample from Stage 1 (VQ-VAE)]
