Perceiving Systems Conference Paper 2015

Consensus Message Passing for Layered Graphical Models


Generative models provide a powerful framework for probabilistic reasoning. However, in many domains their use has been hampered by the practical difficulties of inference. This is particularly the case in computer vision, where models of the imaging process tend to be large, loopy and layered. For this reason bottom-up conditional models have traditionally dominated in such domains. We find that widely-used, general-purpose message passing inference algorithms such as Expectation Propagation (EP) and Variational Message Passing (VMP) fail on the simplest of vision models. With these models in mind, we introduce a modification to message passing that learns to exploit their layered structure by passing 'consensus' messages that guide inference towards good solutions. Experiments on a variety of problems show that the proposed technique leads to significantly more accurate inference results, not only when compared to standard EP and VMP, but also when compared to competitive bottom-up conditional models.

Author(s): Varun Jampani and S. M. Ali Eslami and Daniel Tarlow and Pushmeet Kohli and John Winn
Book Title: Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS)
Volume: 38
Pages: 425--433
Year: 2015
Month: May
Publisher: JMLR Workshop and Conference Proceedings
Bibtex Type: Conference Paper (inproceedings)
Event Name: Eighteenth International Conference on Artificial Intelligence and Statistics
Event Place: San Diego, USA
URL: http://www.aistats.org

BibTeX

@inproceedings{jampani15aistats,
  title = {Consensus Message Passing for Layered Graphical Models},
  booktitle = {Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS)},
  abstract = {Generative models provide a powerful framework for probabilistic reasoning. However, in many domains their use has been hampered by the practical difficulties of inference. This is particularly the case in computer vision, where models of the imaging process tend to be large, loopy and layered. For this reason bottom-up conditional models have traditionally dominated in such domains. We find that widely-used, general-purpose message passing inference algorithms such as Expectation Propagation (EP) and Variational Message Passing (VMP) fail on the simplest of vision models. With these models in mind, we introduce a modification to message passing that learns to exploit their layered structure by passing 'consensus' messages that guide inference towards good solutions. Experiments on a variety of problems show that the proposed technique leads to significantly more accurate inference results, not only when compared to standard EP and VMP, but also when compared to competitive bottom-up conditional models.},
  volume = {38},
  pages = {425--433},
  publisher = {JMLR Workshop and Conference Proceedings},
  month = may,
  year = {2015},
  slug = {jampani15aistats},
  author = {Jampani, Varun and Eslami, S. M. Ali and Tarlow, Daniel and Kohli, Pushmeet and Winn, John},
  url = {http://www.aistats.org},
  month_numeric = {5}
}