
Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks


Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. In contrast to competing approaches that combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders, and is easy to implement.
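The core idea translates into a compact training loop: an auxiliary discriminator T(x, z) is trained with a logistic loss to distinguish pairs (x, z) with z drawn from the inference model q(z|x) from pairs with z drawn from the prior p(z); at its optimum, T(x, z) equals log q(z|x) - log p(z), so it can stand in for the intractable KL term of the ELBO. Below is a minimal sketch in PyTorch, assuming binary (e.g. MNIST-like) data and toy MLP networks; the module names (enc, dec, disc) and all hyperparameters are illustrative, not the authors' reference implementation.

import torch
import torch.nn as nn

d_x, d_z, d_eps, d_h = 784, 32, 32, 256   # illustrative sizes

# Implicit inference model: z = enc(x, eps) with noise eps ~ N(0, I),
# so q(z|x) needs no tractable density.
enc = nn.Sequential(nn.Linear(d_x + d_eps, d_h), nn.ReLU(), nn.Linear(d_h, d_z))
dec = nn.Sequential(nn.Linear(d_z, d_h), nn.ReLU(), nn.Linear(d_h, d_x))
disc = nn.Sequential(nn.Linear(d_x + d_z, d_h), nn.ReLU(), nn.Linear(d_h, 1))

opt_vae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def avb_step(x):                            # x: batch of values in [0, 1]
    eps = torch.randn(x.size(0), d_eps)
    z_q = enc(torch.cat([x, eps], dim=1))   # z ~ q(z|x), reparameterized
    z_p = torch.randn(x.size(0), d_z)       # z ~ p(z), standard normal prior

    # Discriminator step: classify (x, z~q(z|x)) against (x, z~p(z)).
    # At the optimum, disc(x, z) estimates log q(z|x) - log p(z).
    t_q = disc(torch.cat([x, z_q.detach()], dim=1))
    t_p = disc(torch.cat([x, z_p], dim=1))
    loss_disc = bce(t_q, torch.ones_like(t_q)) + bce(t_p, torch.zeros_like(t_p))
    opt_disc.zero_grad()
    loss_disc.backward()
    opt_disc.step()

    # VAE step: maximize the ELBO with the KL term replaced by disc(x, z).
    # opt_vae updates only enc/dec; the spurious gradients this backward
    # leaves on disc are zeroed before its next update.
    rec = nn.functional.binary_cross_entropy_with_logits(
        dec(z_q), x, reduction='sum') / x.size(0)
    kl_estimate = disc(torch.cat([x, z_q], dim=1)).mean()
    loss_vae = rec + kl_estimate
    opt_vae.zero_grad()
    loss_vae.backward()
    opt_vae.step()
    return loss_disc.item(), loss_vae.item()

Because no closed-form density q(z|x) is needed, the encoder can be an arbitrary black-box network; the discriminator supplies the missing log-density ratio, which is what allows the arbitrarily expressive inference models claimed in the abstract.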

Author(s): L. Mescheder and S. Nowozin and A. Geiger
Book Title: Proceedings of the 34th International Conference on Machine Learning
Volume: 70
Year: 2017
Month: August
Day: 6-11
Series: Proceedings of Machine Learning Research
Editors: Doina Precup, Yee Whye Teh
Publisher: PMLR
Bibtex Type: Conference Paper (inproceedings)
Event Name: International Conference on Machine Learning (ICML)
Event Place: International Convention Centre, Sydney, Australia
ISSN: 1938-7228

BibTeX

@inproceedings{Mescheder2017ICML,
  title = {Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  abstract = {Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference models. We achieve this by introducing an auxiliary discriminative network that allows us to rephrase the maximum-likelihood problem as a two-player game, hence establishing a principled connection between VAEs and Generative Adversarial Networks (GANs). We show that in the nonparametric limit our method yields an exact maximum-likelihood assignment for the parameters of the generative model, as well as the exact posterior distribution over the latent variables given an observation. In contrast to competing approaches that combine VAEs with GANs, our approach has a clear theoretical justification, retains most advantages of standard Variational Autoencoders, and is easy to implement.},
  volume = {70},
  series = {Proceedings of Machine Learning Research},
  editor = {Doina Precup and Yee Whye Teh},
  publisher = {PMLR},
  month = aug,
  year = {2017},
  slug = {mescheder2017arxiv},
  author = {Mescheder, L. and Nowozin, S. and Geiger, A.},
  month_numeric = {8}
}