Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.
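
The "constrained information loss" mentioned in the abstract can be illustrated, in simplified form, as a KL-constrained optimization over the state-action distribution. The sketch below is illustrative only: the symbols p (new distribution), q (observed distribution), R (reward), \epsilon (information-loss bound) and \eta (Lagrange multiplier) are generic, and the paper's full REPS formulation additionally imposes stationarity (feature-matching) constraints and replaces the plain reward by a Bellman-error-like term.

\max_{p} \; \sum_{s,a} p(s,a)\, R(s,a)
\quad \text{s.t.} \quad \sum_{s,a} p(s,a) \log \frac{p(s,a)}{q(s,a)} \le \epsilon,
\qquad \sum_{s,a} p(s,a) = 1.

For this simplified problem, the solution is an exponential reweighting of the previous distribution,

p(s,a) \;\propto\; q(s,a)\, \exp\!\left( \frac{R(s,a)}{\eta} \right),

where \eta is the multiplier of the relative-entropy constraint. Bounding the KL divergence between successive distributions is what limits the information lost in each policy update and yields a closed-form ("exact") update step.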

Author(s): Peters, J. and Mülling, K. and Altun, Y.
Book Title: Proceedings of the Twenty-Fourth National Conference on Artificial Intelligence
Pages: 1607-1612
Year: 2010
Month: July
Editors: Fox, M. and Poole, D.
Publisher: AAAI Press
BibTeX Type: Conference Paper (inproceedings)
Address: Menlo Park, CA, USA
Event Name: Twenty-Fourth National Conference on Artificial Intelligence (AAAI-10)
Event Place: Atlanta, GA, USA
Institution: Association for the Advancement of Artificial Intelligence
ISBN: 978-1-57735-463-5
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@inproceedings{6439,
  title = {Relative Entropy Policy Search},
  booktitle = {Proceedings of the Twenty-Fourth National Conference on Artificial Intelligence},
  abstract = {Policy search is a successful approach to reinforcement
  learning. However, policy improvements often result
  in the loss of information. Hence, it has been marred
  by premature convergence and implausible solutions.
  As first suggested in the context of covariant policy
  gradients (Bagnell and Schneider 2003), many of these
  problems may be addressed by constraining the information
  loss. In this paper, we continue this path of reasoning
  and suggest the Relative Entropy Policy Search
  (REPS) method. The resulting method differs significantly
  from previous policy gradient approaches and
  yields an exact update step. It works well on typical
  reinforcement learning benchmark problems.},
  pages = {1607-1612},
  editor = {Fox, M. and Poole, D.},
  publisher = {AAAI Press},
  organization = {Max-Planck-Gesellschaft},
  institution = {Association for the Advancement of Artificial Intelligence},
  school = {Biologische Kybernetik},
  address = {Menlo Park, CA, USA},
  month = jul,
  year = {2010},
  slug = {6439},
  author = {Peters, J. and M{\"u}lling, K. and Altun, Y.},
  month_numeric = {7}
}