
Convex variational Bayesian inference for large scale generalized linear models

We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary super-Gaussian potentials. By iteratively decoupling the criterion, most of the work can be done by solving large linear systems, rendering our algorithm orders of magnitude faster than previously proposed solvers for the same problem. We evaluate our method on problems of Bayesian active learning for large binary classification models, and show how to address settings with many candidates and sequential inclusion steps.
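The decoupling idea can be sketched for the simplest instance, a sparse linear model with Gaussian likelihood and Laplace potentials. The sketch below is schematic, not the authors' implementation: the toy data, the prior scale tau, and the use of an exact matrix inverse are our own assumptions. The outer loop recomputes marginal-variance bounds z; the inner loop then minimizes a smoothed penalized least-squares criterion that decouples across coordinates given z, which is the part that reduces to solving large linear systems at scale.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy sparse linear model: y = X w + noise, Laplace potentials on w.
n, d = 40, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
sigma2 = 0.1                                   # noise variance (assumed)
y = X @ w_true + np.sqrt(sigma2) * rng.standard_normal(n)
tau = 1.0                                      # Laplace scale (assumed)

z = np.ones(d)   # outer-loop bound on posterior marginal variances
u = np.zeros(d)  # inner-loop minimizer (posterior mean estimate)

for outer in range(10):
    # Inner loop: smoothed penalized least squares. Given z, the
    # penalty sqrt(z_i + u_i^2) is a decoupled, convex surrogate.
    def phi(u):
        r = y - X @ u
        return r @ r / sigma2 + 2.0 * tau * np.sum(np.sqrt(z + u * u))

    u = minimize(phi, u, method="L-BFGS-B").x

    # Outer loop: refit the super-Gaussian bound parameters gamma,
    # then recompute marginal variances z from the Gaussian posterior.
    gamma = np.sqrt(z + u * u) / tau
    A = X.T @ X / sigma2 + np.diag(1.0 / gamma)
    z = np.diag(np.linalg.inv(A))  # exact inverse; feasible only at toy size

print("posterior mean estimate:", np.round(u, 2))

At realistic sizes the exact inverse in the outer loop is not feasible; per the abstract, the bulk of the work is instead done by iterative solvers for large linear systems.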

Author(s): Nickisch, H. and Seeger, M. W.
Book Title: ICML 2009
Journal: Proceedings of the 26th International Conference on Machine Learning (ICML 2009)
Pages: 761-768
Year: 2009
Month: June
Editors: Danyluk, A., Bottou, L., Littman, M.
Publisher: ACM Press
BibTeX Type: Conference Paper (inproceedings)
Address: New York, NY, USA
DOI: 10.1145/1553374.1553472
Event Name: 26th International Conference on Machine Learning
Event Place: Montreal, Canada
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@inproceedings{5864,
  title = {Convex variational Bayesian inference for large scale generalized linear models},
  journal = {Proceedings of the 26th International Conference on Machine Learning (ICML 2009)},
  booktitle = {ICML 2009},
  abstract = {We show how variational Bayesian inference can be implemented for very large generalized linear models. Our relaxation is proven to be a convex problem for any log-concave model. We provide a generic double loop algorithm for solving this relaxation on models with arbitrary super-Gaussian potentials. By iteratively decoupling the criterion, most of the work can be done by solving large linear systems, rendering our algorithm orders of magnitude faster than previously proposed solvers for the same problem. We evaluate our method on problems of Bayesian active learning for large binary classification models, and show how to address settings with many candidates and sequential inclusion steps.},
  pages = {761-768},
  editors = {Danyluk, A. and Bottou, L. and Littman, M.},
  publisher = {ACM Press},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {New York, NY, USA},
  month = jun,
  year = {2009},
  slug = {5864},
  author = {Nickisch, H. and Seeger, M. W.},
  month_numeric = {6}
}