
Parameter-exploring policy gradients

We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower-variance gradient estimates than those obtained by regular policy gradient methods. We show that for several complex control tasks, including robust standing with a humanoid robot, this method outperforms well-known algorithms from the fields of standard policy gradients, finite-difference methods and population-based heuristics. We also show that the improvement is largest when the parameter samples are drawn symmetrically. Lastly, we analyse the importance of the individual components of our method by incrementally incorporating them into the other algorithms, and measuring the gain in performance after each step.
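The abstract's core idea, sampling perturbations directly in parameter space and evaluating them in symmetric pairs, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's full algorithm: it keeps the exploration standard deviations fixed and omits the baseline and the sigma-update rule; `pepg_step`, the toy reward, and all hyperparameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def pepg_step(reward_fn, mu, sigma, alpha, rng):
    """One symmetric-sampling update of the policy-parameter mean (sketch).

    Draws a single perturbation in parameter space, evaluates the reward at
    mu + eps and mu - eps (the symmetric pair the abstract highlights), and
    moves mu along eps scaled by the reward difference.
    """
    eps = rng.normal(0.0, sigma)      # sample directly in parameter space
    r_plus = reward_fn(mu + eps)      # symmetric sample pair
    r_minus = reward_fn(mu - eps)
    return mu + alpha * eps * (r_plus - r_minus) / 2.0

# Toy problem (hypothetical, not one of the paper's benchmarks):
# maximise reward = -||theta - target||^2.
target = np.array([1.0, -2.0, 0.5])
reward = lambda theta: -np.sum((theta - target) ** 2)

rng = np.random.default_rng(0)
mu = np.zeros(3)               # mean of the parameter distribution
sigma = np.full(3, 0.5)        # exploration std, kept fixed in this sketch
for _ in range(300):
    mu = pepg_step(reward, mu, sigma, alpha=0.1, rng=rng)
```

Because the reward difference of a symmetric pair cancels even-order terms, the perturbation itself acts as the gradient direction estimate, which is the variance-reduction effect the abstract attributes to symmetric sampling.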

Author(s): Sehnke, F. and Osendorfer, C. and Rückstiess, T. and Graves, A. and Peters, J. and Schmidhuber, J.
Journal: Neural Networks
Volume: 23
Number (issue): 4
Pages: 551-559
Year: 2010
Month: May
Bibtex Type: Article (article)
DOI: 10.1016/j.neunet.2009.12.004
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@article{6154,
  title = {Parameter-exploring policy gradients},
  journal = {Neural Networks},
  abstract = {We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower-variance gradient estimates than those obtained by regular policy gradient methods. We show that for several complex control tasks, including robust standing with a humanoid robot, this method outperforms well-known algorithms from the fields of standard policy gradients, finite-difference methods and population-based heuristics. We also show that the improvement is largest when the parameter samples are drawn symmetrically. Lastly, we analyse the importance of the individual components of our method by incrementally incorporating them into the other algorithms, and measuring the gain in performance after each step.},
  volume = {23},
  number = {4},
  pages = {551-559},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = may,
  year = {2010},
  author = {Sehnke, F. and Osendorfer, C. and R{\"u}ckstiess, T. and Graves, A. and Peters, J. and Schmidhuber, J.}
}