Status: Archive (code is provided as-is, no updates expected)
Code for reproducing key results in the paper Improving Variational Inference with Inverse Autoregressive Flow by Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling.
Make sure that recent versions of the following are installed:
- Numpy (e.g. `pip install numpy`)
- Theano (e.g. `pip install Theano`)
Set `floatX = float32` in the `[global]` section of the Theano config (usually `~/.theanorc`). Alternatively, you can prepend `THEANO_FLAGS=floatX=float32` to the python commands below.
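For instance, a minimal sketch that appends the setting to `~/.theanorc` (this assumes the file does not already contain a `[global]` section; if it does, edit it instead):

```sh
# append the floatX setting to the Theano config (creates the file if needed)
cat >> ~/.theanorc <<'EOF'
[global]
floatX = float32
EOF
```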
Clone this repository, e.g.:

```sh
git clone https://github.com/openai/iaf.git
```
Download the CIFAR-10 dataset (get the Python version) and create an environment variable `CIFAR10_PATH` that points to the subdirectory with the CIFAR-10 data. For example:

```sh
export CIFAR10_PATH="$HOME/cifar-10"
```
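A sketch of the full download step, assuming the standard CIFAR-10 distribution URL and that the data files should live directly under `$HOME/cifar-10` (both assumptions; adjust to your setup):

```sh
# fetch the Python version of CIFAR-10
curl -O https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
# unpack into $HOME/cifar-10, dropping the archive's top-level directory
mkdir -p "$HOME/cifar-10"
tar -xzf cifar-10-python.tar.gz -C "$HOME/cifar-10" --strip-components=1
# point CIFAR10_PATH at the directory containing the data batches
export CIFAR10_PATH="$HOME/cifar-10"
```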
Example:

```sh
python train.py with problem=cifar10 n_z=32 n_h=64 depths=[2,2,2] margs.depth_ar=1 margs.posterior=down_iaf2_nl margs.kl_min=0.25
```
- `problem` is the problem (dataset) to train on. I only tested `cifar10` for this release.
- `n_z` is the number of stochastic feature maps in each layer.
- `n_h` is the number of deterministic feature maps used throughout the model.
- `depths` is an array of integers that gives the depth of each level in the model. Each level is a sequence of layers, and each subsequent level operates over spatially smaller feature maps. In the case of CIFAR-10, the first level operates over 16x16 feature maps, the second over 8x8 feature maps, etc.
Some possible choices for `margs.posterior` are:
- `up_diag`: bottom-up factorized Gaussian
- `up_iaf1_nl`: bottom-up IAF, mean-only perturbation
- `up_iaf2_nl`: bottom-up IAF
- `down_diag`: top-down factorized Gaussian
- `down_iaf1_nl`: top-down IAF, mean-only perturbation
- `down_iaf2_nl`: top-down IAF
- `margs.depth_ar` is the number of hidden layers within IAF, and can be any non-negative integer.
- `margs.kl_min` is the minimum information constraint. It should be a non-negative float (where 0 means no constraint).
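For example, to train the corresponding factorized Gaussian baseline instead of IAF (the settings below mirror the example above and are illustrative, not a configuration from the paper):

```sh
python train.py with problem=cifar10 n_z=32 n_h=64 depths=[2,2,2] margs.posterior=down_diag margs.kl_min=0.25
```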
To reproduce the paper's CIFAR-10 result of 3.28 bits/dim:

```sh
python train.py with problem=cifar10 n_h=160 depths=[10,10] margs.depth_ar=2 margs.posterior=down_iaf2_nl margs.prior=diag margs.kl_min=0.25
```
More instructions will follow.
Make sure that recent versions of the following are installed:
- Python (version 2.7 or higher)
- TensorFlow
- tqdm
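For example (the PyPI package names below are an assumption; install TensorFlow per its own instructions if this does not match your setup):

```sh
pip install tensorflow tqdm
```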
The `CIFAR10_PATH` environment variable should point to the dataset location, as described above.
Training script:

```sh
python tf_train.py --logdir <logdir> --hpconfig depth=1,num_blocks=20,kl_min=0.1,learning_rate=0.002,batch_size=32 --num_gpus 8 --mode train
```
It will run the training procedure on the given number of GPUs. Model checkpoints will be stored in the `<logdir>/train` directory along with TensorBoard summaries that are useful for monitoring and debugging issues.
Evaluation script:

```sh
python tf_train.py --logdir <logdir> --hpconfig depth=1,num_blocks=20,kl_min=0.1,learning_rate=0.002,batch_size=32 --num_gpus 1 --mode eval_test
```
It will run the evaluation on the test set using a single GPU and will produce a TensorBoard summary with the results and generated samples.
To start TensorBoard:

```sh
tensorboard --logdir <logdir>
```
For a description of the hyper-parameters, take a look at the `get_default_hparams` function in `tf_train.py`.
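Hyper-parameters are passed to `--hpconfig` as comma-separated `key=value` pairs, as in the commands above; anything not listed presumably keeps its default from `get_default_hparams`. For example (the overridden values here are illustrative):

```sh
# override only depth and kl_min; other hyper-parameters keep their defaults
python tf_train.py --logdir <logdir> --hpconfig depth=1,kl_min=0.25 --num_gpus 8 --mode train
```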
The best IAF model trained on CIFAR-10 reached 3.15 bits/dim when evaluated with a single sample. With 10,000 samples, the estimate of the log-likelihood is 3.111 bits/dim. The checkpoint is available at link. Steps to use it:
- download the file
- create the directory `<logdir>/train/` and copy the checkpoint there
- run the following command:

```sh
python tf_train.py --logdir <logdir> --hpconfig depth=1,num_blocks=20,kl_min=0.1,learning_rate=0.002,batch_size=32 --num_gpus 1 --mode eval_test
```

The script will run the evaluation on the test set and generate samples, which are stored in a TensorFlow events file that can be accessed using TensorBoard.
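As a concrete sketch of the checkpoint-placement steps above (the checkpoint filename is hypothetical; use whatever files the download actually contains):

```sh
mkdir -p <logdir>/train
# copy the downloaded checkpoint files (hypothetical names) into place
cp model.ckpt* <logdir>/train/
```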