matthewvowels1 / Awesome-VAEs



Awesome work on the VAE, disentanglement, representation learning, and generative models.

I gathered these resources (currently @ 758 papers) as literature for my PhD, and thought they might be useful to others. This list includes works relevant to various topics relating to VAEs, and sometimes spills over into neighbouring areas, e.g. adversarial training and GANs, general disentanglement, variational inference, flow-based models, and auto-regressive models. Always keen to expand the list - feel free to contribute, or email me if I've missed your paper off the list :]

Papers are ordered by year (newest first). I provide a link to the paper as well as to the GitHub repo where available.
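For readers new to the area, the quantity that nearly every paper below builds on is the evidence lower bound (ELBO). A minimal NumPy sketch of the standard VAE loss (Bernoulli decoder, diagonal-Gaussian posterior), illustrative only and not taken from any particular listed paper:

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    the regularization term of the VAE ELBO."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def bernoulli_nll(x, x_recon, eps=1e-7):
    """Negative log-likelihood of binary data under a Bernoulli decoder,
    i.e. the reconstruction term of the ELBO."""
    x_recon = np.clip(x_recon, eps, 1 - eps)
    return -np.sum(x * np.log(x_recon) + (1 - x) * np.log(1 - x_recon))

def negative_elbo(x, x_recon, mu, log_var):
    """Loss minimized when training a vanilla VAE: reconstruction + KL."""
    return bernoulli_nll(x, x_recon) + gaussian_kl(mu, log_var)

# When the approximate posterior equals the prior, the KL term vanishes.
print(gaussian_kl(np.zeros(4), np.zeros(4)))  # 0.0
```

Many of the papers below (beta-VAE, rate-distortion analyses, posterior-collapse fixes) amount to reweighting or re-deriving these two terms.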


Targeted VAE: structured inference and targeted learning for causal parameter estimation. Vowels, Camgoz, Bowden

Amortized mixture prior for variational sequence generation. Chien, Tsai

Collective dynamics of repeated inference in variational autoencoder rapidly find cluster structure. Nagano, Karakida, Okada

Physics-constrained predictive molecular latent space discovery with graph scattering variational autoencoder. Shervani-Tabar, Zabaras

Hierarchical sparse variational autoencoder for text encoding. Prokhorov, Li, Shareghi, Collier

Discrete memory addressing variational autoencoder for visual concept learning. Min, Su, Zhu, Zhang

Embedding and generation of indoor climbing routes with variational autoencoder. Lo

Semi-supervised deep learning in motor imagery-based brain-computer interfaces with stacked variational autoencoder. Chen, Yu, Gu

A dimensionality reduction algorithm for mapping tokamak operation regimes using variational autoencoder neural network. Wei, Brooks, Chandra, Levesque

Multi-adversarial variational autoencoder nets for simultaneous image generation and classification. Imran, Terzopoulos

VAE-BRIDGE: variational autoencoder filter for Bayesian ridge imputation of missing data. Pereira, Abreu, Rodrigues

Variational online learning of neural dynamics. Zhao, Park

Improving robustness and generality of NLP models using disentangled representations. Wu, Li, Ao, Meng, Wu, Li

A robust image watermarking approach using cycle variational autoencoder. Wei, Wang, Zhang

RVAE-ABFA: robust anomaly detection for high dimensional data using variational autoencoder. Gao, Shi, Dong, Chen, Mi, Huang, Shi

Variational autoencoding dialogue sub-structures using a novel hierarchical annotation scheme. Tewari, Persiani, Umea

dynamicVAE: decoupling reconstruction error and disentangled representation learning. Shao, Lin, Yang, Yao, Zhao, Abdelzaher

Deep transparent prediction through latent representation analysis. Kollias, Bouas, Vlaxos, Brillakis, Seferis, Kollia et al

Interpretable operational risk classification with semi-supervised variational autoencoder. Fan, Zhang, Yang

Content-collaborative disentanglement representation learning for enhanced recommendation. Zhang, Zhu, Caverlee

Optimized k-means clustering algorithm using an intelligent stable-plastic variational autoencoder with self-intrinsic cluster validation mechanism. Gikera, Mambo, Mwaura

Identifying treatment effects under unobserved confounding by causal representation learning. Anonymous

Unsupervised discovery of interpretable latent manipulations in language VAEs. Anonymous

VideoGen: Generative modeling of videos using VQ-VAE and transformers. Anonymous

Goal-conditioned variational autoencoder trajectory primitives with continuous and discrete latent codes. Osa, Ikemoto

Self-supervised disentanglement of modality-specific and shared factors improves multimodal generative models. Daunhawer, Sutter, Marcinkevics, Vogt

Decoupling representation learning from reinforcement learning. Stooke, Lee, Abbeel, Laskin

DCAVN: Cervical cancer prediction and classification using deep convolutional and variational autoencoder network. Khamparia, Gupta, Rodrigues, de Albuquerque

Learning sampling in financial statement audits using vector quantised variational autoencoder neural networks. Schreyer, Sattarov, Gierbl, Reimer, Borth

Multilinear latent conditioning for generating unseen attribute combinations. Georgopoulos, Chrysos, Pantic, Panagakis

Ordinal-content VAE: Isolating ordinal-valued content factors in deep latent variable models. Kim, Pavlovic

Quasi-symplectic Langevin variational autoencoder. Wang, Delingette

Trajectory prediction by using contextual LSTM based variational autoencoder. Cho, Cha

Dynamical variational autoencoders: a comprehensive review. Girin, Leglaive, Bie, Diard, Hueber, Alameda-Pineda

Metrics for exposing the biases of content-style disentanglement. Liu, Thermos, Valvano, Chartsias, O'Neil, Tsaftaris

Speech source separation using variational autoencoder and bandpass filter. Do, Tran, Chau

Variations in variational autoencoders - a comparative evaluation. Wei, Garcia, El-Sayed, Peterson, Mahmood

Variational information bottleneck for semi-supervised classification. Voloshynovskiy, Taran, Kondah, Holotyak, Rezende

Conditional introspective variational autoencoder for image synthesis. Zheng, Cheng, Kang, Yao, Tian

Deep generative models in inversion: a review and development of a new approach based on a variational autoencoder. Lopez-Alvis, Laloy, Nguyen, Hermans

Robust vision-based workout analysis using diversified deep latent variable model. Xiong, Berkovsky, Sharan, Liu, Coiera

Variational autoencoders. Fleuret

Disentangling multiple features in video sequences using Gaussian processes in variational autoencoders. Bhagat, Uppal, Yin, Lim

Improved techniques for training score-based generative models. Song, Ermon

Optimal variance control of the score function gradient estimator for importance weighted bounds. Lievin, Dittadi, Christensen, Winther

Rewriting a deep generative model. Bau, Liu, Wang, Zhu, Torralba

SRFlow: learning the super-resolution space with normalizing flow. Lugmayr, Danelljan, Gool, Timofte

Generalized energy based models. Arbel, Zhou, Gretton

Variational autoencoder for anti-cancer drug response prediction. Xie, Dong, Jing, Ren

Unsupervised clustering through Gaussian mixture variational autoencoder with non-reparameterized variational inference and std annealing. Li, Zhao, Chen, Xu, Li, Pei

Toward discriminating and synthesizing motion traces using deep probabilistic generative models. Zhou, Liu, Zhang, Trajcevski

Generating in-between images through learned latent space representation using variational autoencoders. Cristovao, Nakada, Tanimura, Asoh

xGAIL: explainable generative adversarial imitation learning for explainable human decision analysis. Pan, Huang, Li, Zhou, Luo

A survey on generative adversarial networks for imbalance problems in computer vision tasks. Sampath, Maurtua, Martin, Gutierrez

Linear disentangled representations and unsupervised action estimation. Painter, Hare, Prugel-Bennett

Learning interpretable representation for controllable polyphonic music generation. Wang, Wang, Zhang, Xia

Disentangled item representation for recommender systems. Cui, Yu, Wu, Liu, Wang

Joint variational autoencoders for recommendation with implicit feedback. Askari, Szlichta, Salehi-Abari

Transferred discrepancy: quantifying the difference between representations. Feng, Zhai, He, Wang, Dong

What should not be contrastive in contrastive learning. Xiao, Wang, Efros, Darrell

SCAN: learning to classify images without labels. Gansbeke, Vandenhende, Georgoulis, Proesmans, Gool

SG-VAE: scene grammar variational autoencoder to generate new indoor scenes. Purkait, Zach, Reid

Unsupervised domain adaptation in the wild via disentangling representation learning. Li, Wan, Wang, Kot

Variational autoencoder for generation of antimicrobial peptides. Dean, Walper

Multimodal deep generative models for trajectory prediction: a conditional variational autoencoder approach. Ivanovic, Leung, Schmerling, Pavone

A conditional variational autoencoder algorithm for reconstructing defect data of magnetic flux leakage. Lu, Wu, Zhang

CRUDS: Counterfactual recourse using disentangled subspaces. Downs, Chu, Yacoby, Doshi-Velez, Pan

Using deep variational autoencoder networks for recognizing geochemical anomalies. Luo, Xiong, Zuo

SeCo: exploring sequence supervision for unsupervised representation learning. Yao, Zhang, Qiu, Pan, Mei

LoCo: local contrastive representation learning. Xiong, Ren, Urtasun

Geometrically enriched latent spaces. Arvanitidis, Hauberg, Scholkopf

PDE-driven spatiotemporal disentanglement. Dona, Franceschi, Lamprier, Gallinari

Dynamics generalization via information bottleneck in deep reinforcement learning. Lu, Lee, Abbeel, Tiomkin

Semi-supervised adversarial variational autoencoder. Zemouri

Improving sample quality by training and sampling from latent energy. Xiao, Yan, Amit

Quantitative understanding of VAE by interpreting ELBO as rate distortion cost of transform coding. Nakagawa, Kato

dMELODIES: a music dataset for disentanglement learning. Pati, Gururani, Lerch

Privacy-preserving voice analysis via disentangled representations. Aloufi, Haddadi, Boyle

Approximation based variance reduction for reparameterization gradients. Geffner, Domke

Online variational learning of dirichlet process mixtures of scaled dirichlet distributions. Manouchehri, Nguyen, Koochemeshkian, Bouguila, Fan

A commentary on the unsupervised learning of disentangled representations. Locatello, Bauer, Lucic, Ratsch, Gelly, Scholkopf, Bachem

Learning disentangled representations with latent variation predictability. Zhu, Xu, Tao

TDAE: autoencoder-based automatic feature learning method for the detection of DNS tunnel. Wu, Zhang, Yin

A variational autoencoder mixture model for online behavior recommendation. Nguyen, Cho

Towards nonlinear disentanglement in natural data with temporal sparse coding. Klindt, Schott, Sharma, et al.

Improving generative modelling in VAEs using multimodal prior. Abrol, Sharma, Patra

Generative flows with matrix exponential. Xiao, Liu

Undirected graphical models as approximate posteriors. Vahdat, Andriyash, Macready

DMRAE: discriminative manifold regularized autoencoder for sparse and robust feature learning. Farajian, Adibi

Variational Bayesian quantization. Yang, Bamler, Mandt

Dispersed exponential family mixture VAEs for interpretable text generation. Shi, Zhou, Miao, Li

Empirical study of the benefits of overparameterization in learning latent variable models. Buhai, Halpern, Kim, Risteski, Sontag

Relaxed-responsibility hierarchical discrete VAEs. Willetts, Miscouridou, Roberts, Holmes

Deep generative video compression with temporal autoregressive transforms. Yang, Yang, Marino, Yang, Mandt

Learning invariances for interpretability using supervised VAE. Nguyen, Martinez

Towards a theoretical understanding of the robustness of variational autoencoders. Camuto, Willetts, Roberts, Holmes, Rainforth

Hierarchical linear disentanglement of data-driven conceptual spaces. Alshaikh, Bouraoui, Schockaert

Distribution augmentation for generative modeling. Jun, Child, Chen, Schulman, Ramesh, Radford, Sutskever

Deep heterogeneous autoencoder for subspace clustering of sequential data. Siddique, Mozhdehi, Medeiros

Self-reflective variational autoencoder. Apostolopoulou, Rosenfeld, Dubrawski

Disentangled variational autoencoder based multi-label classification with covariance-aware multivariate probit model. Bai, Kong, Gomes

InfoGAN-CR and ModelCentrality: self-supervised model training and selection for disentangling GANs. Lin, Thekumparampil, Fanti, Oh

Reconstruction bottlenecks in object-centric generative models. Engelcke, Jones, Posner

Variational learning of Bayesian neural networks via Bayesian dark knowledge. Shen, Chen, Deng

A look inside the black-box: towards the interpretability of conditioned variational autoencoder for collaborative filtering. Carraro, Polato, Aiolli

Topologically-based variational autoencoder for time series classification. Rivera-Castro, Moustafa, Pilyugina, Burnaev

Modeling and interpreting road geometry from a driver's perspective using variational autoencoders. Wang, Chen, Wijnands, Guo

Do compressed representations generalize better? Hafez-Kolahi, Kasaei, Soleymani-Baghshah

PRI-VAE: principle-of-relevant-information variational autoencoders. Li, Yu, Principe, Li, Wu

Latent variable modelling with hyperbolic normalizing flows. Bose, Smofsky, Liao, Panangaden, Hamilton

Object-centric learning with slot attention. Locatello, Weissenborn, Unterthiner, Mahendran, Heigold, Uszkoreit, Dosovitskiy, Kipf

NVAE: A deep hierarchical variational autoencoder. Vahdat, Kautz

Variational inference for sequential data with future likelihood estimates. Kim, Jang, Yang, Kim

Exponential tilting of generative models: improving sample quality by training and sampling from latent energy. Xiao, Yan, Amit

Hierarchical path VAE-GAN: generating diverse videos from a single sample. Gur, Benaim, Wolf

Contrastive code representations learning. Jain, Jain, Zhang, Abbeel, Gonzalez, Stoica

Efficient learning of generative models via finite-difference score matching. Pang, Xu, Li, Song, Ermon, Zhu

Towards recurrent autoregressive flow models. Mern, Morales, Kochenderfer

Benefiting deep latent variable models via learning the prior and removing latent regularization. Morrow, Chiu

VAEs in the presence of missing data. Collier, Nazabal, Williams

Mixture of discrete normalizing flows for variational inference. Kusmierczyk, Klami

Spatial revising variational autoencoder-based feature extraction method for hyperspectral images. Yu, Zhang, Shen

Monitoring of nonlinear processes with multiple operating modes through a novel Gaussian mixture variational autoencoder model. Tang, Peng, Dong, Zhang, Zhao

A new approach for smoking event detection using a variational autoencoder and neural decision forest. Fan, Gao

Isometric Gaussian process latent variable model for dissimilarity data. Jorgensen, Hauberg

VAEM: a deep generative model for heterogeneous mixed type data. Ma, Tschiatschek, Hernandez-Lobato, Turner

Disentangling by subspace diffusion. Pfau, Higgins, Botev, Racaniere

Latent variable modeling with random features. Gundersen, Zhang, Engelhardt

Variational orthogonal features. Burt, Rasmussen, van der Wilk

Scale-space autoencoders for unsupervised anomaly segmentation in brain MRI. Baur, Wiestler, Albarqouni, Navab

Learning from demonstration with weakly supervised disentanglement. Hristov, Ramamoorthy.

A tutorial on VAEs: from Bayes' rule to lossless compression. Yu

Density deconvolution with normalizing flows. Dockhorn, Ritchie, Yu, Murray

Rethinking semi-supervised learning in VAEs. Joy, Schmon, Torr, Siddharth, Rainforth

DisARM: An antithetic gradient estimator for binary latent variables. Dong, Mnih, Tucker

On casting importance weighted autoencoder to an EM algorithm to learn deep generative models. Kim, Hwang, Kim

Sparsity enforcement on latent variables for better disentanglement in VAE. Cristovao, Nakada, Tanimura, Asoh

Isometric autoencoders. Atzmon, Gropp, Lipman

Constraining variational inference with geometric Jensen-Shannon divergence. Deasy, Simidjievski, Lio

Neural decomposition: functional ANOVA with variational autoencoders. Martens, Yau

Variational autoencoder with learned latent structure. Connor, Canal, Rozell

Transfer learning approach for botnet detection based on recurrent variational autoencoder. Kim, Sim, Kim, Wu, Hahm

Anomaly-based intrusion detection from network flow features using variational autoencoder. Zavrak, Iskefiyeli

Longitudinal variational autoencoder. Ramchandran, Tikhonov, Koskinen, Lahdesmaki

Gaussian mixture variational autoencoder for semi-supervised topic modeling. Zhou, Ban, Zhang, Li, Zhang

Structural autoencoders improve representations for generation and transfer. Leeb, Annadani, Bauer, Scholkopf

High-dimensional similarity search with quantum-assisted variational autoencoder. Gao, Wilson, Vandal, Vinci, Nemani, Rieffel

Robust variational autoencoder for tabular data with beta divergence. Akrami, Aydore, Leahy, Joshi

Evidence-aware inferential text generation with vector quantised variational autoencoder. Guo, Tang, Duan, Yin, Jiang, Zhou

LaRVAE: label replacement VAE for semi-supervised disentanglement learning. Nie, Wang, Patel, Baraniuk

AR-DAE: towards unbiased neural entropy gradient estimation. Lim, Courville, Pal, Huang

Learning latent space energy-based prior model. Pang, Han, Nijkamp, Zhu, Wu

Disentanglement for discriminative visual recognition. Liu

Deep critiquing for VAE-based recommender systems. Luo, Yang, Wu, Sanner

To regularize or not to regularize? The bias variance trade-off in regularized VAEs. Mondal, Asnani, Singla

DisCont: self-supervised visual attribute disentanglement using context vectors. Bhagat, Udandarao, Uppal

Interpretable deep graph generation with node-edge co-disentanglement. Guo, Zhao, Qin, Wu, Shehu, Ye

Output-relevant variational autoencoder for just-in-time soft sensor modeling with missing data. Guo, Bai, Huang

Deep variational autoencoder: an efficient tool for PHM frameworks. Zemouri, Levesque, Amyot, Hudon, Kokoko

Model extraction defence using modified variational autoencoder. Gupta

Variational variance: simple and reliable predictive variance parameterization. Stirn, Knowles

Probabilistic autoencoder. Bohm, Seljak

Deep latent-variable models for natural language understanding and generation. Shen

Generalization via information bottleneck in deep reinforcement learning. Lu, Tiomkin, Abbeel

Optimal configuration of concentrating solar power in multienergy power systems with an improved variational autoencoder. Qi, Hu, Dong, Fan, Dong, Xiao

tvGP-VAE: tensor-variate Gaussian process prior variational autoencoder. Campbell, Lio

OC-FakeDect: classifying deepfakes using one-class variational autoencoder. Khalid, Woo

Tuning a variational autoencoder for data accountability problem in the Mars science laboratory ground data system. Lakhmiri, Alimo, Le Digabel

Generate high fidelity images with generative variational autoencoder. Sagar

PuppeteerGAN: arbitrary portrait animation with semantic-aware appearance transformation. Chen, Wang, Yuan, Tao

Joint training of variational auto-encoder and latent energy-based model. Han, Nijkamp, Zhou, Pang, Zhu, Wu

Feature-based generative design of mechanisms with a variational autoencoder. Brandt

Denoising diffusion probabilistic models. Ho, Jain, Abbeel

Simple and effective VAE training with calibrated decoders. Rybkin, Daniilidis, Levine

SurVAE Flows: surjections to bridge the gap between VAEs and flows. Nielsen, Jaini, Hoogeboom, Winther, Welling

Mutual information gradient estimation for representation learning. Wen, Zhou, He, Zhou, Xu

Cross-VAE: towards disentangling expression from identity for human faces. Wu, Jia, Xie, Qi, Shi, Tian

CONFIG: controllable neural face image generation. Kowalski, Garbin, Estellers, Baltrusaitis, Johnson, Shotton

Variance constrained autoencoding. Braithwaite, O'Connor, Kleijn

Jigsaw-VAE: towards balancing features in variational autoencoders. Taghanaki, Havaei, Lamb, Sanghi

The usefulness of the deep learning method of variational autoencoder to reduce measurement noise in Glaucomatous visual fields. Asaoka, Murata, Asano, Matsuura, Fujino et al.

methCancer-gen: a DNA methylome dataset generator for user-specified cancer type based on conditional variational autoencoder. Choi, Chae

Deep latent variable model for longitudinal group factor analysis. Qiu, Chinchilli, Lin

Prototypical contrastive learning of unsupervised representations. Li, Zhou, Xiong, Socher, Hoi

A semi-supervised approach for identifying abnormal heart sounds using variational autoencoder. Banerjee, Ghose

Semi-supervised neural chord estimation based on a variational autoencoder with discrete labels and continuous textures of chords. Wu, Carsault, Nakamura, Yoshii

A deeper look at the unsupervised learning of disentangled representations in beta-VAE from the perspective of core object recognition. Sikka

Many-to-many voice conversion using cycle-consistent variational autoencoder with multiple decoders. Yook, Leem, Lee, Yoo

HyperVAE: a minimum description length variational hyper-encoding network. Nguyen, Tran, Gupta, Rana, Dam, Venkatesh

Disentangling in latent space by harnessing a pretrained generator. Nitzan, Bermano, Li, Cohen-Or

Attention mechanism for human motion prediction. Al-aqel, Khan

Brain lesion detection using a robust variational autoencoder and transfer learning. Akrami, Joshi, Li, Aydore, Leahy

Deep variational autoencoder for modeling functional brain networks and ADHD identification. Qiang, Dong, Sun, Ge, Liu

Dual autoencoders generative adversarial network for imbalanced classification problem. Wu, Cui, Welsch

Pairwise supervised hashing with bernoulli variational auto-encoder and self-control gradient estimator. Dadaneh, Boluki, Yin, Zhou, Qian

S3VAE: self-supervised sequential VAE for representation disentanglement and data generation. Zhu, Min, Kadav, Graf

VMI-VAE: variational mutual information maximization framework for VAE with discrete and continuous priors. Serdega, Kim

Variational autoencoder with embedded student-t mixture model for authorship attribution. Boenninghoff, Zeiler, Nickel, Kolossa

Deep learning on the 2-dimensional Ising model to extract the crossover region with a variational autoencoder. Walker, Tam, Jarrell

Context-dependent token-wise variational autoencoder for topic modeling. Masada

High-fidelity audio generation and representation learning with guided adversarial autoencoder. Haque, Rana, Schuller

Adaptive efficient coding: a variational auto-encoder approach. Aridor, Grechi, Woodford

Noise-to-compression variational autoencoder for efficient end-to-end optimized image coding. Luo, Li, Dai, Xu, Cheng, Li, Xiong

Guided image generation with conditional invertible neural networks. Ardizzone, Luth, Kruse, Rother, Kothe

Vector quantization-based regularization for autoencoders. Wu, Flierl

MHVAE: a human-inspired deep hierarchical generative model for multimodal representations learning. Vasco, Melo, Paiva

NewtonianVAE: proportional control and goal identification from pixels via physical latent spaces. Jaques, Burke, Hospedales

Constrained variational autoencoder for improving EEG based speech recognition systems. Krishna, Tran, Carnahan, Tewfik

Variational mutual information maximization framework for VAE latent codes with continuous and discrete priors. Serdega

Monitoring and prediction of big process data with deep latent variable models and parallel computing. Yang, Ge

Polarized-VAE: proximity based disentangled representation learning for text generation. Balasubramanian, Kobyzev, Bahuleyan, Shapiro, Vechtomova

Discretized bottleneck: posterior-collapse-free sequence-to-sequence learning. Zhao, Yu, Mahapatra, Su, Chen

Remote sensing image captioning via Variational Autoencoder and Reinforcement learning. Shen, Liu, Zhou, Zhao, Liu

Conditioned variational autoencoder for top-N item recommendation. Polato, Carraro, Aiolli

Multi-speaker and multi-domain emotional voice conversion using factorized hierarchical variational autoencoder. Elgaar, Park, Lee

beta-variational autoencoder as an entanglement classifier. Sa, Roditi

Preventing posterior collapse with Levenshtein variational autoencoder. Havrylov, Titov

Multi-decoder RNN autoencoder based on variational Bayes method. Kaji, Watanabe, Kobayashi

Bootstrap latent-predictive representations for multitask reinforcement learning. Guo, Pries, Piot, Grill, Altche, Munoz, Azar

Anomaly detection of time series with smoothness-inducing sequential variational auto-encoder. Li, Yan, Wang, Jin

A batch normalized inference network keeps the KL vanishing away. Zhu, Bi, Liu, Ma, Li, Wu

From symbols to signals: symbolic variational autoencoders. Devaraj, Chowdhury, Jain, Kubricht, Tu, Santa

Unsupervised real image super-resolution via generative variational autoencoder. Liu, Sui, Wang, Li, Cani, Chan

Interpreting rate-distortion of variational autoencoder and using model uncertainty for anomaly detection. Park, Adosoglou, Pardalos

Computational representation of Chinese characters: comparison between Singular Value Decomposition and Variational Autoencoder. Tseng, Hsieh

Curiosity-driven variational autoencoder for deep Q network. Han, Zhang, Mao

6GCVAE: gated convolutional variational autoencoder for IPv6 Target Generation. Cui, Gou, Xiong

Text-based malicious domain names detection based on variational autoencoder and supervised learning. Sun, Chong, Ochiai

Mutual information gradient estimation for representation learning. Wen, Zhou, He, Zhou, Xu

CausalVAE: structured causal disentanglement in variational autoencoder. Yang, Liu, Chen, Shen, Hao, Wang

Vroc: Variational autoencoder-aided multi-task rumor classifier based on text. Cheng, Nazarian, Bogdan

On the encoder-decoder incompatibility in variational text modeling and beyond. Wu, Wang, Wang

Estimate the implicit likelihoods of GANs with application to anomaly detection. Ren, Li, Zhou, Li

Emotional response generation using conditional variational autoencoder. Lee, Choi

PatchVAE: learning local latent codes for recognition. Gupta, Singh, Shrivastava

Generating tertiary protein structures via an interpretative variational autoencoder. Guo, Tadepalli, Zhao, Shehu

Attribute-based regularization of VAE latent spaces. Pati, Lerch

Controllable variational autoencoder. Shao, Yao, Sun, Zhang, Liu, Liu, Wang, Abdelzaher

Variational autoencoder-based dimensionality reduction for high-dimensional small-sample data classification. Mahmud, Huang, Fu

Normalizing flows with multi-scale autoregressive priors. Mahajan, Bhattacharyya, Fritz, Schiele, Roth

Adversarial latent autoencoders. Pidhorskyi, Adjeroh, Doretto

OPTIMUS: organizing sentences via pre-trained modeling of latent space. Li, Gao, Li, Li, Peng, Zhang, Gao

Learning discrete structured representations by adversarially maximizing mutual information. Stratos, Wiseman

AI giving back to statistics? Discovery of the coordinate system of univariate distributions by beta variational autoencoder. Glushkovsky

Towards democratizing music production with AI - design of variational autoencoder-based rhythm generator as a DAW plugin. Tokui

Decomposed adversarial learned inference. Li, Wang, Chen, Gao

Fast NLP Model Pretraining with Vampire - Blog post describing AllenAI work on use of VAEs to pre-train NLP models

A robust speaker clustering method based on discrete tied variational autoencoder. Feng, Wang, Li, Peng, Xiao

mmFall: Fall detection using 4D mmwave radar and variational recurrent autoencoder. Jin, Sengupta, Cao

Variational auto-encoders: not all failures are equal. Berger, Sebag

Fully convolutional variational autoencoder for feature extraction of fire detection system. Nugroho, Susanty, Irawan, Koyimatu, Yunita

Time-varying item feature conditional variational autoencoder for collaborative filtering. Kim

Multi-objective variational autoencoder: an application for smart infrastructure maintenance. Anaissi, Zandavi

Variational autoencoder with optimizing Gaussian mixture model priors. Guo, Zhou, Chen, Ying, Zhang, Zhou

Combining model predictive path integral with Kalman variational autoencoder for robot control from raw images. Kwon, Kaneko, Tsurumine, Sasaki, Motonaka, Miyoshi, Matsubara

Botnet detection using recurrent variational autoencoder. Kim, Sim, Kim, Wu

A flow-based deep latent variable model for speech spectrogram modeling and enhancement. Nugraha, Sekiguchi, Yoshii

A variational autoencoder with deep embedding model for generalized zero-shot learning. Ma, Hu

Continuous representation of molecules using graph variational autoencoder. Tavakoli, Baldi

IntroVNMT: an introspective model for variational neural machine translation. Sheng, Xu, Guo, Liu, Zhao, Xu

Epitomic variational graph autoencoder. Khan, Kleinsteuber

Variance loss in variational autoencoders. Asperti

Dynamic narrowing of VAE bottlenecks using GECO and L0 regularization. Boom, Wauthier, Verbelen, Dhoedt

q-VAE for disentangled representation learning and latent dynamical systems. Kobayashi

Remaining useful life prediction via a variational autoencoder and a time-window-based sequence neural network. Su, Li, Wen

A lower bound for the ELBO of the Bernoulli variational autoencoder. Sicks, Korn, Schwaar

VaB-AL: incorporating class imbalance and difficulty with variational Bayes for active learning. Choi, Yi, Kim, Choo, Kim, Chang, Gwon, Chang

Inferring personalized and race-specific causal effects of genomic aberrations on Gleason scores: a deep latent variable model. Chen, Edwards, Hicks, Zhang

SCALOR: generative world models with scalable object representations. Jiang, Janghorbani, Melo, Ahn

Draft and Edit: Automatic Storytelling Through Multi-Pass Hierarchical Conditional Variational Autoencoder. Yu, Li, Liu, Tang, Zhang, Zhao, Yan

Reverse variational autoencoder for visual attribute manipulation and anomaly detection. Gauerhof, Gu

Bridged variational autoencoders for joint modeling of images and attributes. Yadav, Sarana, Namboodiri, Hegde

Treatment effect estimation with disentangled latent factors. Anonymous

Unbalanced GANS: pre-training the generator of generative adversarial network using variational autoencoder. Ham, Jun, Kim

Regularized autoencoders via relaxed injective probability flow. Kumar, Poole, Murphy

Out-of-distribution detection with distance guarantee in deep generative models. Zhang, Liu, Chen, Wang, Liu, Li, Wei, Chen

Balancing reconstruction error and Kullback-Leibler divergence in variational autoencoders. Asperti, Trentin

Data augmentation for historical documents via cascade variational auto-encoder. Cao, Kamata

Controlling generative models with continuous factors of variations. Plumerault, Borgne, Hudelot

Towards a controllable disentanglement network. Song, Koyejo, Zhang

Knowledge-induced learning with adaptive sampling variational autoencoders for open set fault diagnostics. Chao, Adey, Fink

NestedVAE: isolating common factors via weak supervision. Vowels, Camgoz, Bowden

Leveraging cross feedback of user and item embeddings for variational autoencoder based collaborative filtering. Jin, Zhao, Du, Liu, Gao, Li, Xu

K-autoencoders deep clustering. Opochinsky, Chazan, Gannot, Goldberger

D2D-TM: a cycle VAE-GAN for multi-domain collaborative filtering. Nguyen, Ishigaki

Disentangling controllable object through video prediction improves visual reinforcement learning. Zhong, Schwing, Peng

A deep adversarial variational autoencoder model for dimensionality reduction in single-cell RNA sequencing analysis. Lin, Mukherjee, Kannan

Context conditional variational autoencoder for predicting multi-path trajectories in mixed traffic. Cheng, Liao, Yang, Sester, Rosenhahn

Optimizing variational graph autoencoder for community detection with dual optimization. Choong, Liu, Murata

Learning flat latent manifolds with VAEs. Chen, Klushyn, Ferroni, Bayer, van der Smagt

Learning discrete distributions by dequantization. Hoogeboom, Cohen, Tomczak

Learning discrete and continuous factors of data via alternating disentanglement. Jeong, Song

Electrocardiogram generation and feature extraction using a variational autoencoder. Kuznetsov, Moskalenko, Zolotykh

CosmoVAE: variational autoencoder for CMB image inpainting. Yi, Guo, Fan, Hamann, Wang

Unsupervised representation disentanglement using cross domain features and adversarial learning in variational autoencoder based voice conversion. Huang, Luo, Hwang, Lo, Peng, Tsao, Wang

On implicit regularization in beta VAEs. Kumar, Poole

Weakly-supervised disentanglement without compromises. Locatello, Poole, Ratsch, Scholkopf, Bachem, Tschannen

An integrated framework based on latent variational autoencoder for providing early warning of at-risk students. Du, Yang, Hung

Variational autoencoder and friends. Zheng

High-fidelity synthesis with disentangled representation. Lee, Kim, Hong, Lee

Neurosymbolic knowledge representation for explainable and trustworthy AI. Malo

Adversarial disentanglement with grouped observations. Nemeth

AE-OT-GAN: Training GANs from data specific latent distribution. An, Guo, Zhang, Qi, Lei, Yau, Gu

AE-OT: a new generative model based on extended semi-discrete optimal transport. An, Guo, Lei, Luo, Yau, Gu

Disentanglement by nonlinear ICA with general incompressible-flow networks (GIN). Sorrenson, Rother, Kothe

Phase transitions for the information bottleneck in representation learning. Wu, Fischer

Bayesian deep learning: a model-based interpretable approach. Matsubara

SPACE: unsupervised object-oriented scene representation via spatial attention and decomposition. Lin, Wu, Peri, Sun, Singh, Deng, Jiang, Ahn

A variational stacked autoencoder with harmony search optimizer for valve train fault diagnosis of diesel engine. Chen, Mao, Zhao, Jiang, Zhang

Evaluating lossy compression rates of deep generative models. anon

Progressive learning and disentanglement of hierarchical representations. anon

Learning group structure and disentangled representations of dynamical environments. Quessard, Barrett, Clements

A simple framework for contrastive learning of visual representations. Chen, Kornblith, Norouzi, Hinton

Out-of-distribution detection in multi-label datasets using latent space of beta VAE. Sundar, Ramakrishna, Rahiminasab, Easwaran, Dubey

Stochastic virtual battery modeling of uncertain electrical loads using variational autoencoder. Chakraborty, Nandanoori, Kundu, Kalsi

A variational autoencoder solution for road traffic forecasting systems: missing data imputation, dimension reduction, model selection and anomaly detection. Boquet, Morell, Serrano, Vicario

Detecting adversarial examples in learning-enabled cyber-physical systems using variational autoencoder for regression. Cai, Li, Koutsoukos

Variational autoencoders with Riemannian Brownian motion priors. Kalatzis, Eklund, Arvanitidis, Hauberg

Unsupervised representation learning in interactive environments. Racah

Representing closed transformation paths in encoded network latent space. Connor, Rozell

Variational diffusion autoencoders with random walk sampling. Li, Lindenbaum, Cheng, Cloninger

Diffusion variational autoencoders. Rey, Menkovski, Portegies

A wrapped normal distribution on hyperbolic space for gradient-based learning. Nagano, Yamaguchi, Fujita, Koyama

Reparameterizing distributions on Lie groups. Falorsi, Haan, Davidson, Forre

Prescribed generative adversarial networks. Dieng, Ruiz, Blei, Titsias

On the dimensionality of embeddings for sparse features and data. Naumov

Deep variational autoencoders for breast cancer tissue modeling and synthesis in SFDI. Pardo, Lopez-Higuera, Pogue, Conde

Unsupervised anomaly detection of industrial robots using sliding-window convolution variational autoencoder. Chen, Liu, Xia, Wang, Lai

Discriminator optimal transport. Tanaka

Fine-tuning generative models. Khandelwal

Disentangling and learning robust representations with natural clustering. Antoran, Miguel

Inherent tradeoffs in learning fair representations. Zhao, Gordon

Affine variational autoencoders: an efficient approach for improving generalization and robustness to distribution shift. Bidart, Wong

Learning deep controllable and structured representations for image synthesis, structured prediction and beyond. Yan

Continual unsupervised representation learning. Rao, Visin, Rusu, Teh, Pascanu, Hadsell

Group-based learning of disentangled representations with generalizability for novel contents. Hosoya

Task-Conditioned variational autoencoders for learning movement primitives. Noseworthy, Paul, Roy, Park, Roy

Multimodal generative models for compositional representation learning. Wu, Goodman

dpVAEs: fixing sample generation for regularized VAEs. Bhalodia, Lee, Elhabian

From variational to deterministic autoencoders. Ghosh, Sajjadi, Vergari, Black, Scholkopf

Learning representations by maximizing mutual information in variational autoencoder. Rezaabad, Vishwanath

Disentangled representation learning with Wasserstein total correlation. Xiao, Wang

Wasserstein dependency measure for representation learning. Ozair, Lynch, Bengio, van den Oord, Levine, Sermanet

GP-VAE: deep probabilistic time series imputation. Fortuin, Baranchuk, Ratsch, Mandt

Likelihood contribution based multi-scale architecture for generative flows. Das, Abbeel, Spanos

Gated Variational Autoencoders: Incorporating weak supervision to encourage disentanglement. Vowels, Camgoz, Bowden

An introduction to variational autoencoders. Kingma, Welling

Adaptive density estimation for generative models. Lucas, Shmelkov, Schmid, Alahari, Verbeek

Data efficient mutual information neural estimator. Lin, Sur, Nastase, Divakaran, Hasson, Amer

RecVAE: a new variational autoencoder for Top-N recommendations with implicit feedback. Shenbin, Alekseev, Tutubalina, Malykh, Nikolenko

Vibration signal generation using conditional variational autoencoder for class imbalance problem. Ko, Kim, Kong, Lee, Youn

The usual suspects? Reassessing blame for VAE posterior collapse. Dai, Wang, Wipf

What does the free energy principle tell us about the brain? Gershman

Sub-band vector quantized variational autoencoder for spectral envelope quantization. Srikotr, Mano

A variational-sequential graph autoencoder for neural performance prediction. Friede, Lukasik, Stuckenschmidt, Keuper

Explicit disentanglement of appearance and perspective in generative models. Skafte, Hauberg

Disentangled behavioural representations. Dezfouli, Ashtiani, Ghattas, Nock, Dayan, Ong

Learning disentangled representations for robust person re-identification. Eom, Ham

Towards latent space optimality for auto-encoder based generative models. Mondal, Chowdhury, Jayendran, Singla, Asnani, AP

Don't blame the ELBO! A linear VAE perspective on posterior collapse. Lucas, Tucker, Grosse, Norouzi

Bridging the ELBO and MMD. Ucar

Learning disentangled representations for counterfactual regression. Hassanpour, Greiner

Learning disentangled representations for recommendation. Ma, Zhou, Cui, Yang, Zhu

A vector quantized variational autoencoder (VQ-VAE) autoregressive neural F0 model for statistical parametric speech synthesis. Wang, Takaki, Yamagishi, King, Tokuda

Diversity-aware event prediction based on a conditional variational autoencoder with reconstruction. Kiyomaru, Omura, Murawaki, Kawahara, Kurohashi

Learning multimodal representations with factorized deep generative models. Tsai, Liang, Zadeh, Morency, Salakhutdinov

High-dimensional nonlinear profile monitoring based on deep probabilistic autoencoders. Sergin, Yan

Leveraging directed causal discovery to detect latent common causes. Lee, Hart, Richens, Johri

Robust discrimination and generation of faces using compact, disentangled embeddings. Browatzki, Wallraven

Coulomb Autoencoders. Sansone, Ali, Sun

Contrastive learning of structured world models. Kipf, Pol, Welling

No representation without transformation. Giannone, Masci, Osendorfer

Neural density estimation. Papamakarios

Variational autoencoder-based approach for rail defect identification. Wei, Ni

Variational learning with disentanglement-pytorch. Abdi, Abolmaesumi, Fels

PVAE: learning disentangled representations with intrinsic dimension via approximated L0 regularization. Shi, Glocker, Castro

Mixed-curvature variational autoencoders. Skopek, Ganea, Becigneul

Continuous hierarchical representations with poincare variational autoencoders. Mathieu, Le Lan, Maddison, Tomioka

VIREL: A variational inference framework for reinforcement learning. Fellows, Mahajan, Rudner, Whiteson

Disentangling video with independent prediction. Whitney, Fergus

Disentangling state space representations. Miladinovic, Gondal, Scholkopf, Buhmann, Bauer

AlignFlow: cycle consistent learning from multiple domains via normalizing flows. Grover, Chute, Shu, Cao, Ermon

IB-GAN: disentangled representation learning with information bottleneck GAN. Jeon, Lee, Kim

Learning hierarchical priors in VAEs. Klushyn, Chen, Kurle, Cseke, van der Smagt

ODE2VAE: Deep generative second order ODEs with Bayesian neural networks. Yildiz, Heinonen, Lahdesmaki

Explicitly disentangling image content from translation and rotation with spatial-VAE. Bepler, Zhong, Kelley, Brignole, Berger

A primal-dual link between GANs and autoencoders. Husain, Nock, Williamson

Exact rate-distortion in autoencoders via echo noise. Brekelmans, Moyer, Galstyan, ver Steeg

Direct optimization through arg max for discrete variational auto-encoder. Lorberbom, Jaakkola, Gane, Hazan

Semi-implicit graph variational auto-encoders. Hasanzadeh, Hajiramezanali, Narayanan, Duffield, Zhou, Qian

The continuous Bernoulli: fixing a pervasive error in variational autoencoders. Loaiza-Ganem, Cunningham

Provable gradient variance guarantees for black-box variational inference. Domke

Conditional structure generation through graph variational generative adversarial nets. Yang, Zhuang, Shi, Luu, Li

Scalable spike source localization in extracellular recordings using amortized variational inference. Hurwitz, Xu, Srivastava, Buccino, Hennig

A latent variational framework for stochastic optimization. Casgrain

MAVEN: multi-agent variational exploration. Mahajan, Rashid, Samvelyan, Whiteson

Variational graph recurrent neural networks. Hajiramezanali, Hasanzadeh, Narayanan, Duffield, Zhou, Qian

The thermodynamic variational objective. Masrani, Le, Wood

Variational temporal abstraction. Kim, Ahn, Bengio

Exploiting video sequences for unsupervised disentangling in generative adversarial networks. Tuesca, Uzal

Couple-VAE: mitigating the encoder-decoder incompatibility in variational text modeling with coupled deterministic networks.

Variational mixture-of-experts autoencoders for multi-modal deep generative models. Shi, Siddharth, Paige, Torr

Invertible convolutional flow. Karami, Schuurmans, Sohl-Dickstein, Dinh, Duckworth

Implicit posterior variational inference for deep Gaussian processes. Yu, Chen, Dai, Low, Jaillet

MaCow: Masked convolutional generative flow. Ma, Kong, Zhang, Hovy

Residual flows for invertible generative modeling. Chen, Behrmann, Duvenaud, Jacobsen

Discrete flows: invertible generative models of discrete data. Tran, Vafa, Agrawal, Dinh, Poole

Re-examination of the role of latent variables in sequence modeling. Lai, Dai, Yang, Yoo

Learning-in-the-loop optimization: end-to-end control and co-design of soft robots through learned deep latent representations. Spielbergs, Zhao, Hu, Du, Matusik, Rus

Triad constraints for learning causal structure of latent variables. Cai, Xie, Glymour, Hao, Zhang

Disentangling influence: using disentangled representations to audit model predictions. Marx, Phillips, Friedler, Scheidegger, Venkatasubramanian

Symmetry-based disentangled representation learning requires interaction with environments. Caselles-Dupre, Ortiz, Filliat

Weakly supervised disentanglement with guarantees. Shu, Chen, Kumar, Ermon, Poole

Demystifying inter-class disentanglement. Gabbay, Hoshen

Spectral regularization for combating mode collapse in GANs. Liu, Tang, Xie, Qiu

Geometric disentanglement for generative latent shape models. Aumentado-Armstrong, Tsogkas, Jepson, Dickinson

Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation. Li, Lin, Lin, Wang

Identity from here, pose from there: self-supervised disentanglement and generation of objects using unlabeled videos. Xiao, Liu, Lee

Content and style disentanglement for artistic style transfer. Kotovenko, Sanakoyeu, Lang, Ommer

Unsupervised robust disentangling of latent characteristics for image synthesis. Esser, Haux, Ommer

LADN: local adversarial disentangling network for facial makeup and de-makeup. Gu, Wang, Chiu, Tai, Tang

Video compression with rate-distortion autoencoders. Habibian, van Rozendaal, Tomczak, Cohen

Variable rate deep image compression with a conditional autoencoder. Choi, El-Khamy, Lee

Memorizing normality to detect anomaly: memory-augmented deep autoencoder for unsupervised anomaly detection. Gong, Liu, Le, Saha

AVT: unsupervised learning of transformation equivariant representations by autoencoding variational transformations. Qi, Zhang, Chen, Tian

Deep clustering by Gaussian mixture variational autoencoders with graph embedding. Yang, Cheung, Li, Fang

Variational adversarial active learning. Sinha, Ebrahimi, Darrell

Variational few-shot learning. Zhang, Zhao, Ni, Xu, Yang

Multi-angle point cloud-VAE: unsupervised feature learning for 3D point clouds from multiple angles by joint self-reconstruction and half-to-half prediction. Han, Wang, Liu, Zwicker

LayoutVAE: stochastic scene layout generation from a label set. Jyothi, Durand, He, Sigal, Mori

VV-NET: Voxel VAE Net with group convolutions for point cloud segmentation. Meng, Gao, Lai, Manocha

Bayes-Factor-VAE: hierarchical bayesian deep auto-encoder models for factor disentanglement. Kim, Wang, Sahu, Pavlovic

Robust ordinal VAE: Employing noisy pairwise comparisons for disentanglement. Chen, Batmanghelich

Evaluating disentangled representations. Sepliarskaia, Kiseleva, de Rijke

A stable variational autoencoder for text modelling. Li, Li, Lin, Collinson, Mao

Hamiltonian generative networks. Toth, Rezende, Jaegle, Racaniere, Botev, Higgins

LAVAE: Disentangling location and appearance. Dittadi, Winther

Interpretable models in probabilistic machine learning. Kim

Disentangling speech and non-speech components for building robust acoustic models from found data. Gurunath, Rallabandi, Black

Joint separation, dereverberation and classification of multiple sources using multichannel variational autoencoder with auxiliary classifier. Inoue, Kameoka, Li, Makino

SuperVAE: Superpixelwise variational autoencoder for salient object detection. Li, Sun, Guo

Implicit discriminator in variational autoencoder. Munjal, Paul, Krishnan

TransGaGa: Geometry-aware unsupervised image-to-image translation. Wu, Cao, Li, Qian, Loy

Variational attention using articulatory priors for generating code mixed speech using monolingual corpora. Rallabandi, Black.

One-class collaborative filtering with the queryable variational autoencoder. Wu, Bouadjenek, Sanner.

Predictive auxiliary variational autoencoder for representation learning of global speech characteristics. Springenberg, Lakomkin, Weber, Wermter.

Data augmentation using variational autoencoder for embedding based speaker verification. Wu, Wang, Qian, Yu

One-shot voice conversion with disentangled representations by leveraging phonetic posteriograms. Mohammadi, Kim.

EEG-based adaptive driver-vehicle interface using variational autoencoder and PI-TSVM. Bi, Zhang, Lian

Neural Gaussian copula for variational autoencoder. Wang, Wang

Enhancing VAEs for collaborative filtering: Flexible priors and gating mechanisms. Kim, Suh

Riemannian normalizing flow on variational Wasserstein autoencoder for text modeling. Wang, Wang

Disentanglement with hyperspherical latent spaces using diffusion variational autoencoders. Rey

Learning deep representations by mutual information estimation and maximization. Hjelm, Fedorov, Lavoie-Marchildon, Grewal, Bachman, Trischler, Bengio

Novel tracking approach based on fully-unsupervised disentanglement of the geometrical factors of variation. Vladymyrov, Ariga

Real time trajectory prediction using conditional generative models. Gomez-Gonzalez, Prokudin, Scholkopf, Peters

Disentanglement challenge: from regularization to reconstruction. Qiao, Li, Cai

Improved disentanglement through aggregated convolutional feature maps. Seitzer

Linked variational autoencoders for inferring substitutable and supplementary items. Rakesh, Wang, Shu

On the fairness of disentangled representations. Locatello, Abbati, Rainforth, Bauer, Scholkopf, Bachem

Learning robust representations by projecting superficial statistics out. Wang, He, Lipton, Xing

Understanding posterior collapse in generative latent variable models. Lucas, Tucker, Grosse, Norouzi

On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. Gondal, Wuthrich, Miladinovic, Locatello, Breidt, Volchkov, Akpo, Bachem, Scholkopf, Bauer

DIVA: domain invariant variational autoencoder. Ilse, Tomczak, Louizos, Welling

Comment: Variational Autoencoders as empirical Bayes. Wang, Miller, Blei

Fast MVAE: joint separation and classification of mixed sources based on multichannel variational autoencoder with auxiliary classifier. Li, Kameoka, Makino

Reweighted expectation maximization. Dieng, Paisley

Semisupervised text classification by variational autoencoder. Xu, Tan

Learning deep latent-variable MRFs with amortized Bethe free-energy minimization. Wiseman

Contrastive variational autoencoder enhances salient features. Abid, Zou

Learning latent superstructures in variational autoencoders for deep multidimensional clustering. Li, Chen, Poon, Zhang

Tighter variational bounds are not necessarily better. Rainforth, Kosiorek, Le, Maddison, Igl, Wood, Teh

ISA-VAE: Independent subspace analysis with variational autoencoders. Anon.

Manifold mixup: better representations by interpolating hidden states. Verma, Lamb, Beckham, Najafi, Mitliagkas, Courville, Lopez-Paz, Bengio.

Bit-swap: recursive bits-back coding for lossless compression with hierarchical latent variables. Kingma, Abbeel, Ho.

Practical lossless compression with latent variables using bits back coding. Townsend, Bird, Barber.

BIVA: a very deep hierarchy of latent variables for generative modeling. Maaloe, Fraccaro, Lievin, Winther.

Flow++: improving flow-based generative models with variational dequantization and architecture design. Ho, Chen, Srinivas, Duan, Abbeel.

Sylvester normalizing flows for variational inference. van den Berg, Hasenclever, Tomczak, Welling.

Unbiased implicit variational inference. Titsias, Ruiz.

Robustly disentangled causal mechanisms: validating deep representations for interventional robustness. Suter, Miladinovic, Scholkopf, Bauer.

Tutorial: Deriving the standard variational autoencoder (VAE) loss function. Odaibo

Learning disentangled representations with reference-based variational autoencoders. Ruiz, Martinez, Binefa, Verbeek.

Disentangling factors of variation using few labels. Locatello, Tschannen, Bauer, Ratsch, Scholkopf, Bachem

Disentangling disentanglement in variational autoencoders. Mathieu, Rainforth, Siddharth, Teh

LIA: latently invertible autoencoder with adversarial learning. Zhu, Zhao, Zhang

Emerging disentanglement in auto-encoder based unsupervised image content transfer. Press, Galanti, Benaim, Wolf

MAE: Mutual posterior-divergence regularization for variational autoencoders. Ma, Zhou, Hovy

Overcoming the disentanglement vs reconstruction trade-off via Jacobian supervision. Lezama

Challenging common assumptions in the unsupervised learning of disentangled representations. Locatello, Bauer, Lucic, Ratsch, Gelly, Scholkopf, Bachem

Variational prototyping encoder: one shot learning with prototypical images. Kim, Oh, Lee, Pan, Kweon

Diagnosing and enhancing VAE models (conference and journal paper both available). Dai, Wipf

Disentangling latent hands for image synthesis and pose estimation. Yang, Yao

Rare event detection using disentangled representation learning. Hamaguchi, Sakurada, Nakamura

Disentangling latent space for VAE by label relevant/irrelevant dimensions. Zheng, Sun

Variational autoencoders pursue PCA directions (by accident). Rolinek, Zietlow, Martius

Disentangled representation learning for 3D face shape. Jiang, Wu, Chen, Zhang

Preventing posterior collapse with delta-VAEs. Razavi, van den Oord, Poole, Vinyals

Gait recognition via disentangled representation learning. Zhang, Tran, Yin, Atoum, Liu, Wan, Wang

Hierarchical disentanglement of discriminative latent features for zero-shot learning. Tong, Wang, Klinkigt, Kobayashi, Nonaka

Generalized zero- and few-shot learning via aligned variational autoencoders. Schonfeld, Ebrahimi, Sinha, Darrell, Akata

Unsupervised part-based disentangling of object shape and appearance. Lorenz, Bereska, Milbich, Ommer

A semi-supervised deep generative model for human body analysis. de Bem, Ghosh, Ajanthan, Miksik, Siddharth, Torr

Multi-object representation learning with iterative variational inference. Greff, Kaufman, Kabra, Watters, Burgess, Zoran, Matthey, Botvinick, Lerchner

Generating diverse high-fidelity images with VQ-VAE-2. Razavi, van den Oord, Vinyals

MONet: unsupervised scene decomposition and representation. Burgess, Matthey, Watters, Kabra, Higgins, Botvinick, Lerchner

Structured disentangled representations and hierarchical disentangled representations. Esmaeili, Wu, Jain, Bozkurt, Siddharth, Paige, Brooks, Dy, van de Meent

Spatial Broadcast Decoder: A simple architecture for learning disentangled representations in VAEs. Watters, Matthey, Burgess, Lerchner

Resampled priors for variational autoencoders. Bauer, Mnih

Weakly supervised disentanglement by pairwise similarities. Chen, Batmanghelich

Deep variational information bottleneck. Alemi, Fischer, Dillon, Murphy

Generalized variational inference. Knoblauch, Jewson, Damoulas

Variational autoencoders and nonlinear ICA: a unifying framework. Khemakhem, Kingma

Lagging inference networks and posterior collapse in variational autoencoders. He, Spokoyny, Neubig, Berg-Kirkpatrick

Avoiding latent variable collapse with generative skip models. Dieng, Kim, Rush, Blei

Distribution matching in variational inference. Rosca, Lakshminarayanan, Mohamed

A variational auto-encoder model for stochastic point processes. Mehrasa, Jyothi, Durand, He, Sigal, Mori

Sliced-Wasserstein auto-encoders. Kolouri, Pope, Martin, Rohde

A deep generative model for graph layout. Kwon, Ma

Differentiable perturb-and-parse semi-supervised parsing with a structured variational autoencoder. Corro, Titov

Variational autoencoders with jointly optimized latent dependency structure. He, Gong, Marino, Mori, Lehrmann

Unsupervised learning of spatiotemporally coherent metrics. Goroshin, Bruna, Tompson, Eigen, LeCun

Temporal difference variational auto-encoder. Gregor, Papamakarios, Besse, Buesing, Weber

Representation learning with contrastive predictive coding. van den Oord, Li, Vinyals

Representation disentanglement for multi-task learning with application to fetal ultrasound. Meng, Pawlowski, Rueckert, Kainz

M²VAE - derivation of a multi-modal variational autoencoder objective from the marginal joint log-likelihood. Korthals

Predicting visual memory schemas with variational autoencoders. Kyle-Davidson, Bors, Evans

T-CVAE: Transformer-based conditioned variational autoencoder for story completion. Wang, Wan

PuVAE: A variational autoencoder to purify adversarial examples. Hwang, Park, Jang, Yoon, Cho

Coupled VAE: Improved accuracy and robustness of a variational autoencoder. Cao, Li, Nelson

D-VAE: A variational autoencoder for directed acyclic graphs. Zhang, Jiang, Cui, Garnett, Chen

Are disentangled representations helpful for abstract reasoning? van Steenkiste, Locatello, Schmidhuber, Bachem

A heuristic for unsupervised model selection for variational disentangled representation learning. Duan, Watters, Matthey, Burgess, Lerchner, Higgins

Dual space learning with variational autoencoders. Okamoto, Suzuki, Higuchi, Ohsawa, Matsuo

Variational autoencoders for sparse and overdispersed discrete data. Zhao, Rai, Du, Buntine

Variational auto-decoder. Zadeh, Lim, Liang, Morency.

Causal discovery with attention-based convolutional neural networks. Nauta, Bucur, Seifert

Variational laplace autoencoders. Park, Kim, Kim

Variational autoencoders with normalizing flow decoders.

Gaussian process priors for view-aware inference. Hou, Heljakka, Solin

SGVAE: sequential graph variational autoencoder. Jing, Chi, Tang

Improving multimodal generative models with disentangled latent partitions. Daunhawer, Sutter, Vogt

Cross-population variational autoencoders. Davison, Severson, Ghosh

Evidential disambiguation of latent multimodality in conditional variational autoencoders. Itkina, Ivanovic, Senanayake, Kochenderfer, Pavone

Increasing the generalisation capacity of conditional VAEs. Klushyn, Chen, Cseke, Bayer, van der Smagt

Multi-source neural variational inference. Kurle, Gunnemann, van der Smagt

Early integration for movement modeling in latent spaces. Hornung, Chen, van der Smagt

Building face recognition system with triplet-based stacked variational denoising autoencoder. Lee, Hart, Richens, Johri

Cross-domain variational autoencoder for recommender systems. Shi, Wang

Predictive coding, variational autoencoders, and biological connections. Marino

A general and adaptive robust loss function. Barron

Variational autoencoder trajectory primitives and discrete latent. Osa, Ikemoto

Faster attend-infer-repeat with tractable probabilistic models. Stelzner, Peharz, Kersting https://github.com/stelzner/supair

Learning predictive models from observation and interaction. Schmeckpeper, Xie, Rybkin, Tian, Daniilidis, Levine, Finn

Translating visual art into music. Muller-Eberstein, van Noord

Non-parallel voice conversion with controllable speaker individuality using variational autoencoder. Ho, Akagi

Derivation of the variational Bayes equations. Maren

Conditional neural processes. Garnelo, Rosenbaum, Maddison, Ramalho, Saxton, Shanahan, Teh, Rezende, Eslami

The variational homoencoder: learning to learn high capacity generative models from few examples. Hewitt, Nye, Gane, Jaakkola, Tenenbaum

Wasserstein variational inference. Ambrogioni, Guclu, Gucluturk, Hinne, Maris, van Gerven

The dreaming variational autoencoder for reinforcement learning environments. Andersen, Goodwin, Granmo

DVAE++: Discrete variational autoencoders with overlapping transformations. Vahdat, Macready, Bian, Khoshaman, Andriyash

FFJORD: free-form continuous dynamics for scalable reversible generative models. Grathwohl, Chen, Bettencourt, Sutskever, Duvenaud

A general method for amortizing variational filtering. Marino, Cvitkovic, Yue

Handling incomplete heterogeneous data using VAEs. Nazabal, Olmos, Ghahramani, Valera

Sequential attend, infer, repeat: generative modeling of moving objects. Kosiorek, Kim, Posner, Teh

Doubly reparameterized gradient estimators for Monte Carlo objectives. Tucker, Lawson, Gu, Maddison

Interpretable intuitive physics model. Ye, Wang, Davidson, Gupta

Normalizing Flows Tutorial, Part 2: Modern Normalizing Flows. Eric Jang

Neural autoregressive flows. Huang, Krueger, Lacoste, Courville

Gaussian process prior variational autoencoders. Casale, Dalca, Saglietti, Listgarten, Fusi

ACVAE-VC: non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder. Kameoka, Kaneko, Tanaka, Hojo

Discovering interpretable representations for both deep generative and discriminative models. Adel, Ghahramani, Weller

Autoregressive quantile networks for generative modelling. Ostrovski, Dabney, Munos

Probabilistic video generation using holistic attribute control. He, Lehrmann, Marino, Mori, Sigal

Bias and generalization in deep generative models: an empirical study. Zhao, Ren, Yuan, Song, Goodman, Ermon

On variational lower bounds of mutual information. Poole, Ozair, van den Oord, Alemi, Tucker

GAN - why it is so hard to train generative adversarial networks. Hui

Counterfactuals uncover the modular structure of deep generative models. Besserve, Sun, Scholkopf.

Learning independent causal mechanisms. Parascandolo, Kilbertus, Rojas-Carulla, Scholkopf

Emergence of invariance and disentanglement in deep representations. Achille, Soatto

Variational memory encoder-decoder. Le, Tran, Nguyen, Venkatesh

Variational autoencoders for collaborative filtering. Liang, Krishnan, Hoffman, Jebara

Invariant representations without adversarial training. Moyer, Gao, Brekelmans, Steeg, Galstyan

Density estimation: Variational autoencoders. Rui Shu

TherML: The thermodynamics of machine learning. Alemi, Fischer

Leveraging the exact likelihood of deep latent variable models. Mattei, Frellsen

What is wrong with VAEs? Kosiorek

Stochastic variational video prediction. Babaeizadeh, Finn, Erhan, Campbell, Levine

Variational attention for sequence-to-sequence models. Bahuleyan, Mou, Vechtomova, Poupart

FactorVAE: Disentangling by factorizing. Kim, Mnih

Disentangling factors of variation with cycle-consistent variational autoencoders. Jha, Anand, Singh, Veeravasarapu

Isolating sources of disentanglement in VAEs. Chen, Li, Grosse, Duvenaud

VAE with a VampPrior. Tomczak, Welling

A Framework for the quantitative evaluation of disentangled representations. Eastwood, Williams

Recent advances in autoencoder based representation learning. Tschannen, Bachem, Lucic

InfoVAE: Balancing learning and inference in variational autoencoders. Zhao, Song, Ermon

Understanding disentangling in Beta-VAE. Burgess, Higgins, Pal, Matthey, Watters, Desjardins, Lerchner

Hidden talents of the variational autoencoder. Dai, Wang, Aston, Hua, Wipf

Variational Inference of disentangled latent concepts from unlabeled observations. Kumar, Sattigeri, Balakrishnan

Self-supervised learning of a facial attribute embedding from video. Wiles, Koepke, Zisserman

Wasserstein auto-encoders. Tolstikhin, Bousquet, Gelly, Scholkopf

A two-step disentanglement method. Hadad, Wolf, Shahar

Taming VAEs. Rezende, Viola

IntroVAE: Introspective variational autoencoders for photographic image synthesis. Huang, Li, He, Sun, Tan

Information constraints on auto-encoding variational bayes. Lopez, Regier, Jordan, Yosef

Learning disentangled joint continuous and discrete representations. Dupont

Neural discrete representation learning. van den Oord, Vinyals, Kavukcuoglu

Disentangled sequential autoencoder. Li, Mandt

Variational Inference: A review for statisticians. Blei, Kucukelbir, McAuliffe

Advances in variational inference. Zhang, Kjellstrom

Auto-encoding total correlation explanation. Gao, Brekelmans, Steeg, Galstyan

Fixing a broken ELBO. Alemi, Poole, Fischer, Dillon, Saurous, Murphy

The information autoencoding family: a lagrangian perspective on latent variable generative models. Zhao, Song, Ermon

Debiasing evidence approximations: on importance-weighted autoencoders and jackknife variational inference. Nowozin

Unsupervised discrete sentence representation learning for interpretable neural dialog generation. Zhao, Lee, Eskenazi

Dual swap disentangling. Feng, Wang, Ke, Zeng, Tao, Song

Multimodal generative models for scalable weakly-supervised learning. Wu, Goodman

Do deep generative models know what they don't know? Nalisnick, Matsukawa, Teh, Gorur, Lakshminarayanan

Glow: generative flow with invertible 1x1 convolutions. Kingma, Dhariwal

Inference suboptimality in variational autoencoders. Cremer, Li, Duvenaud

Adversarial Variational Bayes: unifying variational autoencoders and generative adversarial networks. Mescheder, Nowozin, Geiger

Semi-amortized variational autoencoders. Kim, Wiseman, Miller, Sontag, Rush

Spherical Latent Spaces for stable variational autoencoders. Xu, Durrett

Hyperspherical variational auto-encoders. Davidson, Falorsi, De Cao, Kipf, Tomczak

Fader networks: manipulating images by sliding attributes. Lample, Zeghidour, Usunier, Bordes, Denoyer, Ranzato

Training VAEs under structured residuals. Dorta, Vicente, Agapito, Campbell, Prince, Simpson

oi-VAE: output interpretable VAEs for nonlinear group factor analysis. Ainsworth, Foti, Lee, Fox

infoCatVAE: representation learning with categorical variational autoencoders. Lelarge, Pineau

Iterative Amortized inference. Marino, Yue, Mandt

On unifying Deep Generative Models. Hu, Yang, Salakhutdinov, Xing

Diverse Image-to-image translation via disentangled representations. Lee, Tseng, Huang, Singh, Yang

PIONEER networks: progressively growing generative autoencoder. Heljakka, Solin, Kannala

Towards a definition of disentangled representations. Higgins, Amos, Pfau, Racaniere, Matthey, Rezende, Lerchner

Life-long disentangled representation learning with cross-domain latent homologies. Achille, Eccles, Matthey, Burgess, Watters, Lerchner, Higgins

Learning deep disentangled embeddings with F-statistic loss. Ridgeway, Mozer

Learning latent subspaces in variational autoencoders. Klys, Snell, Zemel

On the latent space of Wasserstein auto-encoders. Rubenstein, Scholkopf, Tolstikhin.

Learning disentangled representations with Wasserstein auto-encoders. Rubenstein, Scholkopf, Tolstikhin

The mutual autoencoder: controlling information in latent code representations. Phuong, Kushman, Nowozin, Tomioka, Welling

Auxiliary guided autoregressive variational autoencoders. Lucas, Verbeek

Interventional robustness of deep latent variable models. Suter, Miladinovic, Bauer, Scholkopf

Understanding degeneracies and ambiguities in attribute transfer. Szabo, Hu, Portenier, Zwicker, Favaro

DNA-GAN: learning disentangled representations from multi-attribute images. Xiao, Hong, Ma

Normalizing flows. Kosiorek

Hamiltonian variational auto-encoder. Caterini, Doucet, Sejdinovic

Causal generative neural networks. Goudet, Kalainathan, Caillou, Guyon, Lopez-Paz, Sebag.

Flow-GAN: Combining maximum likelihood and adversarial learning in generative models. Grover, Dhar, Ermon

Linked causal variational autoencoder for inferring paired spillover effects. Rakesh, Guo, Moraffah, Agarwal, Liu

Unsupervised anomaly detection via variational auto-encoder for seasonal KPIs in web applications. Xu, Chen, Zhao, Li, Bu, Li, Liu, Zhao, Pei, Feng, Chen, Wang, Qiao

Mutual information neural estimation. Belghazi, Baratin, Rajeswar, Ozair, Bengio, Hjelm.

Explorations in homeomorphic variational auto-encoding. Falorsi, de Haan, Davidson, Cao, Weiler, Forre, Cohen.

Hierarchical variational memory network for dialogue generation. Chen, Ren, Tang, Zhao, Yin

World models. Ha, Schmidhuber


Towards a neural statistician. Edwards, Storkey

The concrete distribution: a continuous relaxation of discrete random variables. Maddison, Mnih, Teh

Categorical reparameterization with Gumbel-Softmax. Jang, Gu, Poole

Opening the black box of deep neural networks via information. Shwartz-Ziv, Tishby

Discovering causal signals in images. Lopez-Paz, Nishihara, Chintala, Scholkopf, Bottou

Autoencoding variational inference for topic models. Srivastava, Sutton

Hidden Markov model variational autoencoder for acoustic unit discovery. Ebbers, Heymann, Drude, Glarner, Haeb-Umbach, Raj

Application of variational autoencoders for aircraft turbomachinery design. Zalger

Semi-supervised learning with variational autoencoders. Keng

Causal effect inference with deep latent variable models. Louizos, Shalit, Mooij, Sontag, Zemel, Welling

beta-VAE: learning basic visual concepts with a constrained variational framework. Higgins, Matthey, Pal, Burgess, Glorot, Botvinick, Mohamed, Lerchner

Challenges in disentangling independent factors of variation. Szabo, Hu, Portenier, Favaro, Zwicker

Composing graphical models with neural networks for structured representations and fast inference. Johnson, Duvenaud, Wiltschko, Datta, Adams

Split-brain autoencoders: unsupervised learning by cross-channel prediction. Zhang, Isola, Efros

Learning disentangled representations with semi-supervised deep generative models. Siddharth, Paige, van de Meent, Desmaison, Goodman, Kohli, Wood, Torr

Learning hierarchical features from generative models. Zhao, Song, Ermon

Multi-level variational autoencoder: learning disentangled representations from grouped observations. Bouchacourt, Tomioka, Nowozin

Neural Face editing with intrinsic image disentangling. Shu, Yumer, Hadap, Sunkavalli, Shechtman, Samaras

Variational Lossy Autoencoder. Chen, Kingma, Salimans, Duan, Dhariwal, Schulman, Sutskever, Abbeel

Unsupervised learning of disentangled and interpretable representations from sequential data. Hsu, Zhang, Glass

Factorized variational autoencoder for modeling audience reactions to movies. Deng, Navarathna, Carr, Mandt, Yue, Matthews, Mori

Learning latent representations for speech generation and transformation. Hsu, Zhang, Glass

Unsupervised learning of disentangled representations from video. Denton, Birodkar

Laplacian pyramid of conditional variational autoencoders. Dorta, Vicente, Agapito, Campbell, Prince, Simpson

Neural Photo Editing with Introspective Adversarial Networks. Brock, Lim, Ritchie, Weston

Discrete Variational Autoencoder. Rolfe

Reinterpreting importance-weighted autoencoders. Cremer, Morris, Duvenaud

Density estimation using Real NVP. Dinh, Sohl-Dickstein, Bengio

JADE: Joint autoencoders for disentanglement. Banijamali, Karimi, Wong, Ghodsi

Joint Multimodal learning with deep generative models. Suzuki, Nakayama, Matsuo

Towards a deeper understanding of variational autoencoding models. Zhao, Song, Ermon

Lagging inference networks and posterior collapse in variational autoencoders. He, Spokoyny, Neubig, Berg-Kirkpatrick

On the challenges of learning with inference networks on sparse, high-dimensional data. Krishnan, Liang, Hoffman

Stick-breaking Variational Autoencoder. Nalisnick, Smyth

Deep variational canonical correlation analysis. Wang, Yan, Lee, Livescu

Nonparametric variational auto-encoders for hierarchical representation learning. Goyal, Hu, Liang, Wang, Xing

PixelSNAIL: An improved autoregressive generative model. Chen, Mishra, Rohaninejad, Abbeel

Improved Variational Inference with inverse autoregressive flows. Kingma, Salimans, Jozefowicz, Chen, Sutskever, Welling

It takes (only) two: adversarial generator-encoder networks. Ulyanov, Vedaldi, Lempitsky

Symmetric Variational Autoencoder and connections to adversarial learning. Chen, Dai, Pu, Li, Su, Carin

Reconstruction-based disentanglement for pose-invariant face recognition. Peng, Yu, Sohn, Metaxas, Chandraker

Is maximum likelihood useful for representation learning? Huszár

Disentangled representation learning GAN for pose-invariant face recognition. Tran, Yin, Liu

Improved Variational Autoencoders for text modeling using dilated convolutions. Yang, Hu, Salakhutdinov, Berg-kirkpatrick

Improving variational auto-encoders using householder flow. Tomczak, Welling

Sticking the landing: simple, lower-variance gradient estimators for variational inference. Roeder, Wu, Duvenaud.

VEEGAN: Reducing mode collapse in GANs using implicit variational learning. Srivastava, Valkov, Russell, Gutmann.

Discovering discrete latent topics with neural variational inference. Miao, Grefenstette, Blunsom

Variational approaches for auto-encoding generative adversarial networks. Rosca, Lakshminarayanan, Warde-Farley, Mohamed

Variational Autoencoder and extensions. Courville

A neural representation of sketch drawings. Ha, Eck


One-shot generalization in deep generative models. Rezende, Danihelka, Gregor, Wierstra

Attend, infer, repeat: fast scene understanding with generative models. Eslami, Heess, Weber, Tassa, Szepesvari, Kavukcuoglu, Hinton

Deep feature consistent variational autoencoder. Hou, Shen, Sun, Qiu

Neural variational inference for text processing. Miao, Yu, Grefenstette, Blunsom.

Domain-adversarial training of neural networks. Ganin, Ustinova, Ajakan, Germain, Larochelle, Laviolette, Marchand, Lempitsky

Tutorial on Variational Autoencoders. Doersch

How to train deep variational autoencoders and probabilistic ladder networks. Sonderby, Raiko, Maaloe, Sonderby, Winther

ELBO surgery: yet another way to carve up the variational evidence lower bound. Hoffman, Johnson

Variational inference with normalizing flows. Rezende, Mohamed

The Variational Fair Autoencoder. Louizos, Swersky, Li, Welling, Zemel

Information dropout: learning optimal representations through noisy computations. Achille, Soatto

Domain separation networks. Bousmalis, Trigeorgis, Silberman, Krishnan, Erhan

Disentangling factors of variation in deep representations using adversarial training. Mathieu, Zhao, Sprechmann, Ramesh, LeCun

Variational autoencoder for semi-supervised text classification. Xu, Sun, Deng, Tan

Learning what and where to draw. Reed, Sohn, Zhang, Lee

Attribute2Image: Conditional image generation from visual attributes. Yan, Yang, Sohn, Lee

Wild Variational Approximations. Li, Liu

Importance Weighted Autoencoders. Burda, Grosse, Salakhutdinov

Stacked What-Where Auto-encoders. Zhao, Mathieu, Goroshin, LeCun

Disentangling nonlinear perceptual embeddings with multi-query triplet networks. Veit, Belongie, Karaletsos

Ladder Variational Autoencoders. Sonderby, Raiko, Maaloe, Sonderby, Winther

Variational autoencoder for deep learning of images, labels and captions. Pu, Gan, Henao, Yuan, Li, Stevens, Carin

Approximate inference for deep latent Gaussian mixtures. Nalisnick, Hertel, Smyth

Auxiliary Deep Generative Models. Maaloe, Sonderby, Sonderby, Winther

Variational methods for conditional multimodal deep learning. Pandey, Dukkipati

PixelVAE: a latent variable model for natural images. Gulrajani, Kumar, Ahmed, Taiga, Visin, Vazquez, Courville

Adversarial autoencoders. Makhzani, Shlens, Jaitly, Goodfellow, Frey

A hierarchical latent variable encoder-decoder model for generating dialogues. Serban, Sordoni, Lowe, Charlin, Pineau, Courville, Bengio

Infinite variational autoencoder for semi-supervised learning. Abbasnejad, Dick

f-GAN: Training generative neural samplers using variational divergence minimization. Nowozin, Cseke, Tomioka

DISCO Nets: DISsimilarity Coefficient networks. Bouchacourt, Kumar, Nowozin

Weakly-supervised disentangling with recurrent transformations for 3D view synthesis. Yang, Reed, Yang, Lee

Autoencoding beyond pixels using a learned similarity metric. Larsen, Sonderby, Larochelle, Winther

Generating images with perceptual similarity metrics based on deep networks. Dosovitskiy, Brox

A note on the evaluation of generative models. Theis, van den Oord, Bethge.

InfoGAN: interpretable representation learning by information maximizing generative adversarial nets. Chen, Duan, Houthooft, Schulman, Sutskever, Abbeel

Disentangled representations in neural models. Whitney

A recurrent latent variable model for sequential data. Chung, Kastner, Dinh, Goel, Courville, Bengio

Unsupervised learning of 3D structure from images. Rezende, Eslami, Mohamed, Battaglia, Jaderberg, Heess

A survey of inductive biases for factorial representation-learning. Ridgeway

Short notes on variational bounds with rescaled terms. Rezende


Deep learning and the information bottleneck principle. Tishby, Zaslavsky

Training generative neural networks via Maximum Mean Discrepancy optimization. Dziugaite, Roy, Ghahramani

NICE: non-linear independent components estimation. Dinh, Krueger, Bengio

Deep convolutional inverse graphics network. Kulkarni, Whitney, Kohli, Tenenbaum

Learning structured output representation using deep conditional generative models. Sohn, Yan, Lee

Latent variable model with diversity-inducing mutual angular regularization. Xie, Deng, Xing

DRAW: a recurrent neural network for image generation. Gregor, Danihelka, Graves, Rezende, Wierstra.

Variational Inference II. Xing, Zheng, Hu, Deng


Auto-encoding variational Bayes. Kingma, Welling

Learning to disentangle factors of variation with manifold interaction. Reed, Sohn, Zhang, Lee

Semi-supervised learning with deep generative models. Kingma, Rezende, Mohamed, Welling

Stochastic backpropagation and approximate inference in deep generative models. Rezende, Mohamed, Wierstra

Representation learning: a review and new perspectives. Bengio, Courville, Vincent


Transforming Auto-encoders. Hinton, Krizhevsky, Wang


Graphical models, exponential families, and variational inference. Wainwright, Jordan


Variational learning and bits-back coding: an information-theoretic view to Bayesian learning. Honkela, Valpola


The information bottleneck method. Tishby, Pereira, Bialek
