
An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward

We derive a new expectation maximization algorithm for policy optimization in linear Gaussian Markov decision processes, where the reward function is parameterised in terms of a flexible mixture of Gaussians. This approach exploits both analytical tractability and numerical optimization. Consequently, on the one hand, it is more flexible and general than closed-form solutions, such as the widely used linear quadratic Gaussian (LQG) controllers. On the other hand, it is more accurate and faster than optimization methods that rely on approximation and simulation. Partial analytical solutions (though costly) eliminate the need for simulation and, hence, avoid approximation error. The experiments show that, for the same computational cost, policy optimization methods that exploit analytical tractability attain higher value than those that rely on simulation.
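
To make the "analytical tractability" concrete, the sketch below (not the authors' implementation; all names are illustrative) shows the closed-form identity that mixture-of-Gaussians rewards enable: if the reward is a mixture of Gaussians and the state distribution is Gaussian, as in a linear Gaussian MDP, the expected reward can be evaluated exactly, with no simulation.

```python
# Minimal sketch, assuming a reward r(x) = sum_j w_j N(x; mu_j, Sigma_j)
# and a Gaussian state x ~ N(m, S). The expected reward then has the
# closed form  E[r(x)] = sum_j w_j N(mu_j; m, Sigma_j + S).
# Names and the toy example are illustrative, not from the paper.
import numpy as np
from scipy.stats import multivariate_normal

def expected_mixture_reward(m, S, weights, means, covs):
    """Closed-form E[r(x)] for x ~ N(m, S) and a mixture-of-Gaussians reward."""
    return sum(
        w * multivariate_normal.pdf(mu, mean=m, cov=cov + S)
        for w, mu, cov in zip(weights, means, covs)
    )

# Toy 2-D example: one reward bump at the origin, a smaller one at (1, 1).
m = np.zeros(2)                      # state mean
S = 0.5 * np.eye(2)                  # state covariance
weights = [1.0, 0.5]
means = [np.zeros(2), np.ones(2)]
covs = [0.1 * np.eye(2), 0.2 * np.eye(2)]
print(expected_mixture_reward(m, S, weights, means, covs))
```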

Author(s): Hoffman, M. and de Freitas, N. and Doucet, A. and Peters, J.
Book Title: JMLR Workshop and Conference Proceedings Volume 5: AISTATS 2009
Journal: Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS 2009)
Pages: 232-239
Year: 2009
Month: April
Editors: van Dyk, D. and Welling, M.
Publisher: MIT Press
Bibtex Type: Conference Paper (inproceedings)
Address: Cambridge, MA, USA
Event Name: Twelfth International Conference on Artificial Intelligence and Statistics
Event Place: Clearwater Beach, FL, USA
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTex

@inproceedings{5658,
  title = {An Expectation Maximization Algorithm for Continuous Markov Decision Processes with Arbitrary Reward},
  journal = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS 2009)},
  booktitle = {JMLR Workshop and Conference Proceedings Volume 5: AISTATS 2009},
  abstract = {We derive a new expectation maximization algorithm for policy optimization in linear Gaussian Markov decision processes, where the reward function is parameterised in terms of a flexible mixture of Gaussians. This approach exploits both analytical tractability and numerical optimization. Consequently, on the one hand, it is more flexible and general than closed-form solutions, such as the widely used linear quadratic Gaussian (LQG) controllers. On the other hand, it is more accurate and faster than optimization methods that rely on approximation and simulation. Partial analytical solutions (though costly) eliminate the need for simulation and, hence, avoid approximation error. The experiments show that, for the same computational cost, policy optimization methods that exploit analytical tractability attain higher value than those that rely on simulation.},
  pages = {232-239},
  editors = {van Dyk, D. and Welling, M.},
  publisher = {MIT Press},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {Cambridge, MA, USA},
  month = apr,
  year = {2009},
  slug = {5658},
  author = {Hoffman, M. and de Freitas, N. and Doucet, A. and Peters, J.},
  month_numeric = {4}
}