Empirical Inference · Conference Paper · 2009

Efficient data reuse in value function approximation

Off-policy reinforcement learning is aimed at efficiently using data samples gathered from a policy that is different from the currently optimized policy. A common approach is to use importance sampling techniques for compensating for the bias of value function estimators caused by the difference between the data-sampling policy and the target policy. However, existing off-policy methods often do not take the variance of the value function estimators explicitly into account and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. The usefulness of the proposed approach is demonstrated through a simulated swing-up inverted-pendulum problem.
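
The abstract describes the method only at a high level. As a rough illustration of the bias-variance mechanism it refers to, the sketch below fits a linear value model from off-policy samples using importance weights flattened by an exponent nu in [0, 1], and selects nu by cross-validation. The function names, the linear ridge model, and the squared-error criterion are illustrative assumptions for this sketch, not the exact estimator used in the paper.

import numpy as np

def flattened_weights(pi_target, pi_behavior, nu):
    """Per-sample importance weights raised to a flattening exponent nu in [0, 1].

    nu = 0 ignores the policy mismatch (biased, low variance);
    nu = 1 gives full importance weighting (unbiased, high variance).
    """
    return (pi_target / pi_behavior) ** nu

def fit_value_function(Phi, targets, weights, ridge=1e-6):
    """Weighted least-squares fit of a linear value model V(s) = Phi(s) @ theta."""
    W = np.diag(weights)
    A = Phi.T @ W @ Phi + ridge * np.eye(Phi.shape[1])
    b = Phi.T @ W @ targets
    return np.linalg.solve(A, b)

def select_nu(Phi, targets, pi_target, pi_behavior, nus, n_folds=5, seed=0):
    """Pick the flattening parameter by K-fold cross-validation on the
    importance-weighted squared error of held-out samples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(targets))
    folds = np.array_split(idx, n_folds)
    best_nu, best_err = None, np.inf
    for nu in nus:
        w = flattened_weights(pi_target, pi_behavior, nu)
        err = 0.0
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate(folds[:k] + folds[k + 1:])
            theta = fit_value_function(Phi[train], targets[train], w[train])
            # Weight the validation error by the full importance ratio so the
            # selection criterion reflects performance under the target policy.
            ratio = pi_target[test] / pi_behavior[test]
            err += np.mean(ratio * (Phi[test] @ theta - targets[test]) ** 2)
        if err < best_err:
            best_nu, best_err = nu, err
    return best_nu

The two extremes of nu span the trade-off the abstract describes: ignoring the policy mismatch (nu = 0) keeps the variance low at the cost of bias, while full reweighting (nu = 1) removes the bias but can inflate the variance when the data-sampling and target policies differ substantially.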

Author(s): Hachiya, H. and Akiyama, T. and Sugiyama, M. and Peters, J.
Book Title: IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning
Journal: Proceedings of the 2009 IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (IEEE ADPRL 2009)
Pages: 8-15
Year: 2009
Month: May
Publisher: IEEE Service Center
Bibtex Type: Conference Paper (inproceedings)
Address: Piscataway, NJ, USA
DOI: 10.1109/ADPRL.2009.4927519
Event Name: IEEE ADPRL 2009
Event Place: Nashville, TN, USA
Institution: Institute of Electrical and Electronics Engineers
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@inproceedings{5771,
  title = {Efficient data reuse in value function approximation},
  journal = {Proceedings of the 2009 IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning (IEEE ADPRL 2009)},
  booktitle = {IEEE International Symposium on Adaptive Dynamic Programming and Reinforcement Learning},
  abstract = {Off-policy reinforcement learning is aimed at efficiently using data samples gathered from a policy that is different from the currently optimized policy. A common approach is to use importance sampling techniques for compensating for the bias of value function estimators caused by the difference between the data-sampling policy and the target policy. However, existing off-policy methods often do not take the variance of the value function estimators explicitly into account and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. The usefulness of the proposed approach is demonstrated through a simulated swing-up inverted-pendulum problem.},
  pages = {8-15},
  publisher = {IEEE Service Center},
  organization = {Max-Planck-Gesellschaft},
  institution = {Institute of Electrical and Electronics Engineers},
  school = {Biologische Kybernetik},
  address = {Piscataway, NJ, USA},
  month = may,
  year = {2009},
  slug = {5771},
  author = {Hachiya, H. and Akiyama, T. and Sugiyama, M. and Peters, J.},
  month_numeric = {5}
}