Empirical Inference Conference Paper 2008

Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation

Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is an essential problem for physically grounded AI, as experiments are usually prohibitively expensive. A common approach is to use importance sampling techniques to compensate for the bias caused by the difference between the data-sampling policies and the target policy. However, existing off-policy methods often do not explicitly take the variance of value function estimators into account, and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.
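
The abstract only sketches the method, so the following rough Python sketch illustrates the kind of bias-variance trade-off it refers to: per-trajectory importance weights are flattened with an exponent nu in [0, 1] (nu = 0 ignores the policy mismatch entirely, nu = 1 recovers ordinary importance sampling), and nu is chosen by cross-validation. All function names and the simple weighted-return estimator below are illustrative assumptions, not the authors' value-function-approximation algorithm.

import numpy as np

def trajectory_weight(traj, target_policy, behavior_policy):
    # traj: list of (state, action, reward) tuples; each policy is a callable
    # returning the probability of taking an action in a state.
    ratios = [target_policy(s, a) / behavior_policy(s, a) for s, a, _ in traj]
    return float(np.prod(ratios))

def flattened_estimate(trajs, returns, target_policy, behavior_policy, nu):
    # Weighted-return estimate with flattened weights w**nu (nu in [0, 1]).
    # returns: np.ndarray of per-trajectory returns, aligned with trajs.
    w = np.array([trajectory_weight(t, target_policy, behavior_policy) for t in trajs])
    w_nu = w ** nu
    return np.sum(w_nu * returns) / np.sum(w_nu)

def select_nu_by_cv(trajs, returns, target_policy, behavior_policy,
                    candidates=(0.0, 0.25, 0.5, 0.75, 1.0), n_folds=5):
    # Choose the flattening parameter by k-fold cross-validation: fit the
    # estimate on the training folds, score an importance-weighted squared
    # error on the held-out fold, and keep the nu with the lowest total error.
    idx = np.arange(len(trajs))
    folds = np.array_split(idx, n_folds)
    best_nu, best_err = None, np.inf
    for nu in candidates:
        err = 0.0
        for held_out in folds:
            train = np.setdiff1d(idx, held_out)
            est = flattened_estimate([trajs[i] for i in train], returns[train],
                                     target_policy, behavior_policy, nu)
            w_val = np.array([trajectory_weight(trajs[i], target_policy,
                                                behavior_policy) for i in held_out])
            err += np.sum(w_val * (returns[held_out] - est) ** 2) / np.sum(w_val)
        if err < best_err:
            best_nu, best_err = nu, err
    return best_nu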

Author(s): Hachiya, H. and Akiyama, T. and Sugiyama, M. and Peters, J.
Book Title: AAAI 2008
Journal: Proceedings of the Twenty-Third Conference on Artificial Intelligence (AAAI 2008)
Pages: 1351-1356
Year: 2008
Month: July
Editors: Fox, D. and Gomes, C. P.
Publisher: AAAI Press
Bibtex Type: Conference Paper (inproceedings)
Address: Menlo Park, CA, USA
Event Name: Twenty-Third Conference on Artificial Intelligence
Event Place: Chicago, IL, USA
Institution: AAAI
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@inproceedings{5096,
  title = {Adaptive Importance Sampling with Automatic Model Selection in Value Function Approximation},
  journal = {Proceedings of the Twenty-Third Conference on Artificial Intelligence (AAAI 2008)},
  booktitle = {AAAI 2008},
  abstract = {Off-policy reinforcement learning is aimed at efficiently reusing data samples gathered in the past, which is an essential problem for physically grounded AI, as experiments are usually prohibitively expensive. A common approach is to use importance sampling techniques to compensate for the bias caused by the difference between the data-sampling policies and the target policy. However, existing off-policy methods often do not explicitly take the variance of value function estimators into account, and therefore their performance tends to be unstable. To cope with this problem, we propose using an adaptive importance sampling technique which allows us to actively control the trade-off between bias and variance. We further provide a method for optimally determining the trade-off parameter based on a variant of cross-validation. We demonstrate the usefulness of the proposed approach through simulations.},
  pages = {1351-1356},
  editor = {Fox, D. and Gomes, C. P.},
  publisher = {AAAI Press},
  organization = {Max-Planck-Gesellschaft},
  institution = {AAAI},
  school = {Biologische Kybernetik},
  address = {Menlo Park, CA, USA},
  month = jul,
  year = {2008},
  slug = {5096},
  author = {Hachiya, H. and Akiyama, T. and Sugiyama, M. and Peters, J.},
  month_numeric = {7}
}