
About the developer

rosinality (743 stars, 166 forks, 18 commits, 39 open issues)

vq-vae-2-pytorch

Implementation of Generating Diverse High-Fidelity Images with VQ-VAE-2 in PyTorch

Update

  • 2020-06-01

train_vqvae.py and vqvae.py now support distributed training. You can pass the --n_gpu [NUM_GPUS] argument to train_vqvae.py to use [NUM_GPUS] GPUs during training.

Requirements

  • Python >= 3.6
  • PyTorch >= 1.1
  • lmdb (for storing extracted codes)

Checkpoint of VQ-VAE pretrained on FFHQ

Usage

Currently supports 256px (top/bottom hierarchical prior)

  1. Stage 1 (VQ-VAE)

python train_vqvae.py [DATASET PATH]
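The core operation stage 1 learns is vector quantization: each encoder output is snapped to its nearest entry in a learned codebook, and the entry's index becomes the discrete code. A minimal NumPy sketch of that lookup (the `quantize` function and the toy codebook here are illustrative, not the repo's API):

```python
import numpy as np

def quantize(latents, codebook):
    """Map each latent vector to the index of its nearest codebook entry.

    latents:  (N, D) array of encoder outputs
    codebook: (K, D) array of learned code vectors
    Returns (indices, quantized) where quantized[i] = codebook[indices[i]].
    """
    # Squared Euclidean distance between every latent and every code vector.
    dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # (N,) discrete codes
    quantized = codebook[indices]    # (N, D) values fed to the decoder
    return indices, quantized

# Toy example: 4 latents, codebook of 3 entries in 2-D.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
latents = np.array([[0.1, -0.1], [0.9, 1.2], [-1.1, 0.8], [0.0, 0.1]])
indices, quantized = quantize(latents, codebook)
print(indices)  # [0 1 2 0]
```

In VQ-VAE-2 this happens at two resolutions (top and bottom), producing the hierarchical code maps used in stage 2.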

If you use FFHQ, I highly recommend preprocessing the images first (resize and convert to JPEG).

  2. Extract codes for stage 2 training

python extract_code.py --ckpt checkpoint/[VQ-VAE CHECKPOINT] --name [LMDB NAME] [DATASET PATH]
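Conceptually, this step runs every image through the trained VQ-VAE encoder and stores the resulting discrete code maps under a per-image key so stage 2 can read them back quickly. A stdlib-only sketch of that key-value layout, using a plain dict with pickled values as a stand-in for the LMDB database (the helper names and the toy code maps are illustrative, not the repo's actual schema):

```python
import pickle

# Stand-in for the LMDB store: keys are image identifiers, values are the
# pickled top/bottom code maps produced by the VQ-VAE encoder.
store = {}

def put_codes(store, key, top_codes, bottom_codes):
    store[key.encode()] = pickle.dumps({"top": top_codes, "bottom": bottom_codes})

def get_codes(store, key):
    return pickle.loads(store[key.encode()])

# Toy 2x2 "top" and 4x4 "bottom" index maps for one image.
put_codes(store, "img_00001", [[3, 7], [1, 0]], [[0] * 4 for _ in range(4)])
codes = get_codes(store, "img_00001")
print(codes["top"])  # [[3, 7], [1, 0]]
```

LMDB is used in the real script because it handles datasets far larger than memory and supports fast random reads during stage 2 training.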

  3. Stage 2 (PixelSNAIL)

python train_pixelsnail.py [LMDB NAME]

It may be better to use a larger PixelSNAIL model; the model size is currently reduced due to GPU memory constraints.
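At sampling time, the stage 2 prior generates the discrete code grid one position at a time in raster order, conditioning each choice on the codes already sampled; the decoded grid is then passed through the VQ-VAE decoder. A hedged sketch of that loop, with a dummy uniform distribution standing in for the trained PixelSNAIL (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_codes(height, width, n_codes, prior):
    """Sample a grid of discrete codes one position at a time (raster order).

    `prior(grid, i, j)` returns a probability vector over the n_codes
    possible values, conditioned on already-sampled positions -- the role
    a trained PixelSNAIL plays in stage 2.
    """
    grid = np.full((height, width), -1, dtype=int)  # -1 marks "not yet sampled"
    for i in range(height):
        for j in range(width):
            probs = prior(grid, i, j)
            grid[i, j] = rng.choice(n_codes, p=probs)
    return grid

def uniform_prior(grid, i, j, n_codes=8):
    # Stand-in for a trained model: ignores context, returns uniform probs.
    return np.full(n_codes, 1.0 / n_codes)

codes = sample_codes(4, 4, 8, uniform_prior)
print(codes.shape)  # (4, 4)
```

With a real prior, the conditioning on `grid` is what makes the samples coherent; the uniform stand-in just demonstrates the control flow.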

Sample

Stage 1

Sample from Stage 1 (VQ-VAE); note that this is a training sample.
