
Reinforcement Learning by Relative Entropy Policy Search

Policy search is a successful approach to reinforcement learning. However, policy improvements often result in a loss of information, and the approach has therefore been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients, many of these problems may be addressed by constraining the information loss. In this paper, we continue this line of reasoning and propose the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. We also present a real-world application in which a robot employs REPS to learn how to return balls in a game of table tennis.
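
As a minimal sketch for the reader, the information-loss constraint mentioned above can be written as a bound on the relative entropy between the new state-action distribution p and the observed one q. The LaTeX below follows the REPS derivation; the symbols q, \epsilon, \eta, and \delta_V come from that derivation rather than from this abstract, so treat it as an illustrative sketch, not a verbatim restatement of the poster.

% REPS as a constrained optimization problem (sketch):
\begin{align*}
  \max_{p} \quad & \sum_{s,a} p(s,a)\, r(s,a) \\
  \text{s.t.} \quad & \sum_{s,a} p(s,a) \log \frac{p(s,a)}{q(s,a)} \le \epsilon,
  \qquad \sum_{s,a} p(s,a) = 1,
\end{align*}
% together with stationarity constraints on the state distribution.
% Solving with Lagrange multipliers yields the closed-form update
%   p(s,a) \propto q(s,a) \exp( \delta_V(s,a) / \eta ),
% where \delta_V(s,a) is a Bellman-error term and \eta is the
% temperature implied by the bound \epsilon; this is the sense in
% which the update step is "exact".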

Author(s): Peters, J. and Mülling, K. and Altun, Y.
Journal: 30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2010)
Volume: 30
Pages: 69
Year: 2010
Month: July
Bibtex Type: Poster (poster)
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@poster{6746,
  title = {Reinforcement Learning by Relative Entropy Policy Search},
  journal = {30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2010)},
  abstract = {Policy search is a successful approach to reinforcement learning. However, policy
  improvements often result in a loss of information, and the approach has therefore been
  marred by premature convergence and implausible solutions. As first suggested in the
  context of covariant policy gradients, many of these problems may be addressed by
  constraining the information loss. In this paper, we continue this line of reasoning and
  propose the Relative Entropy Policy Search (REPS) method. The resulting method differs
  significantly from previous policy gradient approaches and yields an exact update step.
  It works well on typical reinforcement learning benchmark problems. We also present a
  real-world application in which a robot employs REPS to learn how to return balls in a
  game of table tennis.},
  volume = {30},
  pages = {69},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = jul,
  year = {2010},
  slug = {6746},
  author = {Peters, J. and M{\"u}lling, K. and Altun, Y.},
  month_numeric = {7}
}