Conference Paper

Goal-conditioned Offline Planning from Curious Exploration

Curiosity has established itself as a powerful exploration strategy in deep reinforcement learning. Notably, leveraging expected future novelty as intrinsic motivation has been shown to efficiently generate exploratory trajectories, as well as a robust dynamics model. We consider the challenge of extracting goal-conditioned behavior from the products of such unsupervised exploration techniques, without any additional environment interaction. We find that conventional goal-conditioned reinforcement learning approaches for extracting a value function and policy fall short in this difficult offline setting. By analyzing the geometry of optimal goal-conditioned value functions, we relate this issue to a specific class of estimation artifacts in learned values. In order to mitigate their occurrence, we propose to combine model-based planning over learned value landscapes with a graph-based value aggregation scheme. We show how this combination can correct both local and global artifacts, obtaining significant improvements in zero-shot goal-reaching performance across diverse simulated environments.
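Illustrative sketch (not from the paper): the combination described in the abstract, model-based planning over a learned goal-conditioned value landscape plus graph-based value aggregation, could look roughly like the following. All names here (value_fn, dynamics_model, the random-shooting planner, the reachability threshold) are hypothetical placeholders chosen for illustration, not the authors' implementation.

import heapq
import numpy as np

def plan_actions(state, goal, dynamics_model, value_fn, action_dim,
                 horizon=10, n_candidates=256, rng=None):
    """Random-shooting planner: roll out candidate action sequences through a
    learned dynamics model and keep the sequence whose final state has the
    highest learned goal-conditioned value."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, action_dim))
    scores = np.empty(n_candidates)
    for i, actions in enumerate(candidates):
        s = state
        for a in actions:
            s = dynamics_model(s, a)
        scores[i] = value_fn(s, goal)
    return candidates[int(np.argmax(scores))]

def aggregate_values(nodes, value_fn, threshold=0.5):
    """Graph-based value aggregation: connect exploration states whose learned
    value exceeds a reliability threshold, then replace direct (possibly
    overestimated) values by shortest-path costs over the resulting graph."""
    n = len(nodes)
    cost = np.full((n, n), np.inf)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            v = float(value_fn(nodes[i], nodes[j]))
            if v > threshold:
                # Interpret the value as a discounted reachability proxy.
                cost[i, j] = -np.log(min(max(v, 1e-6), 1.0))
    aggregated = np.zeros((n, n))
    for src in range(n):
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, np.inf):
                continue
            for v in range(n):
                if np.isinf(cost[u, v]):
                    continue
                nd = d + cost[u, v]
                if nd < dist.get(v, np.inf):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        for dst, d in dist.items():
            aggregated[src, dst] = np.exp(-d)
    return aggregated

In this sketch, a goal-conditioned controller would use aggregate_values over states collected during curious exploration to correct over-optimistic value estimates, and plan_actions to select actions toward high-value intermediate nodes.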

Author(s): Marco Bagatella and Georg Martius
BibTeX Type: Conference Paper (inproceedings)
Event Name: Advances in Neural Information Processing Systems 36
Event Place: New Orleans, USA
URL: https://openreview.net/forum?id=QlbZabgMdK
Electronic Archiving: grant_archive

BibTeX

@inproceedings{bagatella2023goal-conditioned,
  title = {Goal-conditioned Offline Planning from Curious Exploration},
  abstract = {Curiosity has established itself as a powerful exploration strategy in deep reinforcement learning. Notably, leveraging expected future novelty as intrinsic motivation has been shown to efficiently generate exploratory trajectories, as well as a robust dynamics model. We consider the challenge of extracting goal-conditioned behavior from the products of such unsupervised exploration techniques, without any additional environment interaction. We find that conventional goal-conditioned reinforcement learning approaches for extracting a value function and policy fall short in this difficult offline setting. By analyzing the geometry of optimal goal-conditioned value functions, we relate this issue to a specific class of estimation artifacts in learned values. In order to mitigate their occurrence, we propose to combine model-based planning over learned value landscapes with a graph-based value aggregation scheme. We show how this combination can correct both local and global artifacts, obtaining significant improvements in zero-shot goal-reaching performance across diverse simulated environments.},
  booktitle = {Advances in Neural Information Processing Systems 36},
  year = {2023},
  slug = {bagatella2023goal-conditioned-7fe98755-c22d-4c47-93a6-bb8154a04617},
  author = {Bagatella, Marco and Martius, Georg},
  url = {https://openreview.net/forum?id=QlbZabgMdK}
}