Empirical Inference Article 2008

Natural Actor-Critic

In this paper, we suggest a novel reinforcement learning architecture, the Natural Actor-Critic. The actor updates are achieved using stochastic policy gradients employing Amari’s natural gradient approach, while the critic obtains both the natural policy gradient and additional parameters of a value function simultaneously by linear regression. We show that actor improvements with natural policy gradients are particularly appealing as these are independent of the coordinate frame of the chosen policy representation and can be estimated more efficiently than regular policy gradients. The critic makes use of a special basis function parameterization motivated by the policy-gradient compatible function approximation. We show that several well-known reinforcement learning methods such as the original Actor-Critic and Bradtke’s Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms. Empirical evaluations illustrate the effectiveness of our techniques in comparison to previous methods, and also demonstrate their applicability for learning control on an anthropomorphic robot arm.
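
The following is a minimal sketch, in the spirit of the episodic natural actor-critic described above: the critic regresses observed returns on the policy's compatible features (the score function), and the resulting regression weights serve as the natural-gradient direction for the actor. The toy one-dimensional reward, the Gaussian policy, the step sizes, and all variable names are illustrative assumptions, not the paper's experimental setup.

# Hedged sketch of an episodic Natural Actor-Critic update on a toy
# 1-D problem with a Gaussian policy (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([0.0])   # policy mean parameter, pi(a) = N(a; theta, sigma^2)
sigma = 0.5               # fixed exploration noise
alpha = 0.1               # actor learning rate
target = 2.0              # optimum of the toy reward r(a) = -(a - target)^2

for iteration in range(200):
    episodes = 50
    Psi = np.zeros((episodes, 2))   # [score function, 1] per episode
    R = np.zeros(episodes)          # episode returns
    for e in range(episodes):
        a = theta[0] + sigma * rng.standard_normal()
        # compatible feature: gradient of log pi w.r.t. the Gaussian mean
        psi = (a - theta[0]) / sigma**2
        Psi[e] = [psi, 1.0]         # constant column absorbs the return baseline
        R[e] = -(a - target)**2     # toy reward; one step per episode

    # Critic: linear regression of returns on compatible features.
    # The leading coefficient w is the natural policy-gradient estimate.
    coef, *_ = np.linalg.lstsq(Psi, R, rcond=None)
    w = coef[:-1]

    # Actor: natural-gradient ascent step.
    theta = theta + alpha * w

print("learned mean:", theta[0])    # approaches `target`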

Author(s): Peters, J. and Schaal, S.
Journal: Neurocomputing
Volume: 71
Number (issue): 7-9
Pages: 1180-1190
Year: 2008
Month: March
Bibtex Type: Article (article)
DOI: 10.1016/j.neucom.2007.11.026
Digital: 0
Electronic Archiving: grant_archive
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@article{4863,
  title = {Natural Actor-Critic},
  journal = {Neurocomputing},
abstract = {In this paper, we suggest a novel reinforcement learning architecture, the Natural
  Actor-Critic. The actor updates are achieved using stochastic policy gradients employing
  Amari’s natural gradient approach, while the critic obtains both the natural policy
  gradient and additional parameters of a value function simultaneously by linear
  regression. We show that actor improvements with natural policy gradients are
  particularly appealing as these are independent of the coordinate frame of the chosen
  policy representation and can be estimated more efficiently than regular policy
  gradients. The critic makes use of a special basis function parameterization motivated
  by the policy-gradient compatible function approximation. We show that several
  well-known reinforcement learning methods such as the original Actor-Critic and
  Bradtke’s Linear Quadratic Q-Learning are in fact Natural Actor-Critic algorithms.
  Empirical evaluations illustrate the effectiveness of our techniques in comparison to
  previous methods, and also demonstrate their applicability for learning control on
  an anthropomorphic robot arm.},
  volume = {71},
  number = {7-9},
  pages = {1180--1190},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  month = mar,
  year = {2008},
  slug = {4863},
  author = {Peters, J. and Schaal, S.},
  month_numeric = {3}
}