In the past decade, numerous machine learning algorithms have been shown to successfully learn optimal policies to control real robotic systems. However, it is common to encounter failing behaviors as the learning loop progresses. Specifically, in robot applications where failing is undesired but not catastrophic, many algorithms struggle to leverage data obtained from failures. This is usually caused by (i) the failed experiment ending prematurely, or (ii) the acquired data being scarce or corrupted. Both complicate the design of proper reward functions to penalize failures. In this paper, we propose a framework that addresses these issues. We consider failing behaviors as those that violate a constraint and address the problem of learning with crash constraints, where no data is obtained upon constraint violation. The no-data case is addressed by a novel Gaussian process (GP) model (GPCR) for the constraint that combines discrete events (failure/success) with continuous observations (only obtained upon success). We demonstrate the effectiveness of our framework on simulated benchmarks and on a real jumping quadruped, where the constraint threshold is unknown a priori. Experimental data is collected directly on the real robot by means of constrained Bayesian optimization. Our results outperform manual tuning, and GPCR proves useful in estimating the constraint threshold.
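
To make the setting concrete, below is a minimal sketch of Bayesian optimization under a crash constraint on a toy 1-D problem. It is not the paper's GPCR model: as a simpler stand-in, the crash constraint is modeled with an off-the-shelf GP classifier, the reward with a GP regressor fitted only on successful trials, and the expected-improvement acquisition is weighted by the predicted probability of success. The toy objective, crash region, and all parameter values are illustrative assumptions.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def run_experiment(x):
    # Toy rollout: parameters inside the (unknown to the learner) crash
    # region yield no reward data at all, mirroring the crash-constraint setting.
    if x > 0.7:
        return None
    return np.sin(5.0 * x) + 0.05 * rng.standard_normal()

X, labels, successes = [], [], []      # all trials / success flags / (x, reward)
for x0 in (0.1, 0.4, 0.8):             # a few seed trials
    r = run_experiment(x0)
    X.append([x0]); labels.append(r is not None)
    if r is not None:
        successes.append((x0, r))

grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
for _ in range(15):
    # Reward surrogate: fitted only on non-crashing trials, since crashes
    # return no continuous observation.
    Xs = np.array([[x] for x, _ in successes])
    ys = np.array([r for _, r in successes])
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-3).fit(Xs, ys)
    mu, sd = gp.predict(grid, return_std=True)
    z = (mu - ys.max()) / np.maximum(sd, 1e-9)
    ei = (mu - ys.max()) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
    # Constraint surrogate: discrete failure/success labels from every trial.
    if len(set(labels)) > 1:           # classifier needs both classes present
        clf = GaussianProcessClassifier(kernel=RBF(0.2)).fit(np.array(X), labels)
        p_success = clf.predict_proba(grid)[:, 1]
    else:
        p_success = np.ones(len(grid))
    # Pick the next experiment by success-weighted expected improvement.
    x_next = float(grid[np.argmax(ei * p_success), 0])
    r = run_experiment(x_next)
    X.append([x_next]); labels.append(r is not None)
    if r is not None:
        successes.append((x_next, r))

print("best reward among non-crashing trials:", max(r for _, r in successes))

GPCR replaces the separate classifier with a single hybrid GP over the constraint that fuses both observation types; the sketch above only reproduces the overall loop structure.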

Author(s): Marco, A. and Baumann, D. and Khadiv, M. and Hennig, P. and Righetti, L. and Trimpe, S.
Journal: IEEE Robotics and Automation Letters
Volume: 6
Number (issue): 2
Pages: 1439--1446
Year: 2021
Month: February
Publisher: IEEE
BibTeX Type: Article (article)
DOI: 10.1109/LRA.2021.3057055
State: Published
URL: https://ieeexplore.ieee.org/document/9345965

BibTeX

@article{marco2021robot,
  title = {Robot Learning with Crash Constraints},
  journal = {IEEE Robotics and Automation Letters},
  abstract = {In the past decade, numerous machine learning algorithms have been shown to successfully learn optimal policies to control real robotic systems. However, it is common to encounter failing behaviors as the learning loop progresses. Specifically, in robot applications where failing is undesired but not catastrophic, many algorithms struggle to leverage data obtained from failures. This is usually caused by (i) the failed experiment ending prematurely, or (ii) the acquired data being scarce or corrupted. Both complicate the design of proper reward functions to penalize failures. In this paper, we propose a framework that addresses these issues. We consider failing behaviors as those that violate a constraint and address the problem of learning with crash constraints, where no data is obtained upon constraint violation. The no-data case is addressed by a novel Gaussian process (GP) model (GPCR) for the constraint that combines discrete events (failure/success) with continuous observations (only obtained upon success). We demonstrate the effectiveness of our framework on simulated benchmarks and on a real jumping quadruped, where the constraint threshold is unknown a priori. Experimental data is collected directly on the real robot by means of constrained Bayesian optimization. Our results outperform manual tuning, and GPCR proves useful in estimating the constraint threshold.},
  volume = {6},
  number = {2},
  pages = {1439--1446},
  publisher = {IEEE},
  month = feb,
  year = {2021},
  slug = {marco2021robot},
  author = {Marco, A. and Baumann, D. and Khadiv, M. and Hennig, P. and Righetti, L. and Trimpe, S.},
  doi = {10.1109/LRA.2021.3057055},
  url = {https://ieeexplore.ieee.org/document/9345965},
  month_numeric = {2}
}