Empirical Inference Conference Paper 2009

Policy Search for Motor Primitives in Robotics

Many motor skills in humanoid robotics can be learned using parametrized motor primitives, as done in imitation learning. However, most interesting motor learning problems are high-dimensional reinforcement learning problems, often beyond the reach of current methods. In this paper, we extend previous work on policy learning from the immediate-reward case to episodic reinforcement learning. We show that this results in a general, common framework that is also connected to policy gradient methods and, by assuming a form of exploration that is particularly well-suited for dynamic motor primitives, yields a novel algorithm for policy learning. The resulting EM-inspired algorithm is applicable to complex motor learning tasks. We compare this algorithm to alternative parametrized policy search methods and show that it outperforms previous methods. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task on a real Barrett WAM robot arm.
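As a rough illustration of the idea in the abstract, the following is a minimal sketch of an EM-style, reward-weighted parameter update with exploration applied directly in parameter space. The toy reward function, the optimum, and all constants here are invented for the example and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy reward: peaks when the parameters hit an (unknown) optimum.
OPTIMUM = np.array([1.0, -0.5, 2.0])

def episode_reward(theta):
    return np.exp(-np.sum((theta - OPTIMUM) ** 2))

theta = np.zeros(3)   # policy parameters, e.g. motor-primitive weights
sigma = 0.3           # exploration noise applied directly in parameter space

for _ in range(300):
    # Perturb the parameters themselves rather than the individual actions --
    # the kind of exploration that suits dynamic motor primitives.
    eps = rng.normal(0.0, sigma, size=(15, theta.size))
    rewards = np.array([episode_reward(theta + e) for e in eps])
    # EM-style update: a reward-weighted average of the explorations.
    theta = theta + rewards @ eps / rewards.sum()

print(np.round(theta, 2))  # ends up near OPTIMUM
```

Because better-rewarded perturbations receive more weight, the update climbs toward high-reward parameters without computing an explicit gradient, which is the intuition behind the reward-weighted, EM-inspired scheme the abstract describes.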

Author(s): Kober, J. and Peters, J.
Book Title: Advances in neural information processing systems 21
Journal: Advances in neural information processing systems 21 : 22nd Annual Conference on Neural Information Processing Systems 2008
Pages: 849-856
Year: 2009
Month: June
Editors: Koller, D., Schuurmans, D., Bengio, Y., Bottou, L.
Publisher: Curran
Bibtex Type: Conference Paper (inproceedings)
Address: Red Hook, NY, USA
Event Name: Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS 2008)
Event Place: Vancouver, BC, Canada
ISBN: 978-1-605-60949-2
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@inproceedings{5411,
  title = {Policy Search for Motor Primitives in Robotics},
  journal = {Advances in neural information processing systems 21 : 22nd Annual Conference on Neural Information Processing Systems 2008},
  booktitle = {Advances in neural information processing systems 21},
  abstract = {Many motor skills in humanoid robotics can be learned using parametrized motor primitives, as done in imitation learning. However, most interesting motor learning problems are high-dimensional reinforcement learning problems, often beyond the reach of current methods. In this paper, we extend previous work on policy learning from the immediate-reward case to episodic reinforcement learning. We show that this results in a general, common framework that is also connected to policy gradient methods and, by assuming a form of exploration that is particularly well-suited for dynamic motor primitives, yields a novel algorithm for policy learning. The resulting EM-inspired algorithm is applicable to complex motor learning tasks. We compare this algorithm to alternative parametrized policy search methods and show that it outperforms previous methods. We apply it in the context of motor learning and show that it can learn a complex Ball-in-a-Cup task on a real Barrett WAM robot arm.},
  pages = {849-856},
  editor = {Koller, D. and Schuurmans, D. and Bengio, Y. and Bottou, L.},
  publisher = {Curran},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {Red Hook, NY, USA},
  month = jun,
  year = {2009},
  slug = {5411},
  author = {Kober, J. and Peters, J.},
  month_numeric = {6}
}