
Optimizing for what matters: the Top Grasp Hypothesis


In this paper, we consider the problem of robotic grasping of objects when only partial and noisy sensor data of the environment is available. We are specifically interested in the problem of reliably selecting the best hypothesis from a whole set. This is commonly the case when trying to grasp an object for which we can only observe a partial point cloud from one viewpoint through noisy sensors. There will be many possible ways to successfully grasp this object, and even more which will fail. We propose a supervised learning method that is trained with a ranking loss. This explicitly encourages that the top-ranked training grasp in a hypothesis set is also positively labeled. We show how we adapt the standard ranking loss to work with data that has binary labels and explain the benefits of this formulation. Additionally, we show how we can efficiently optimize this loss with stochastic gradient descent. In quantitative experiments, we show that we can outperform previous models by a large margin.
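
The abstract describes a ranking loss tailored so that the top-ranked grasp hypothesis in each candidate set is a positive (successful) one, optimized with stochastic gradient descent. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a linear scoring model and a max-margin (hinge) formulation in which, within each hypothesis set, every negative grasp scored within a margin of the best-scoring positive contributes to the loss. All function names and parameters are illustrative.

import numpy as np

def top1_ranking_loss_grad(W, X, y, margin=1.0):
    # Hinge-style top-1 ranking loss for a single hypothesis set.
    # X: (n, d) grasp features, y: (n,) binary labels (1 = successful grasp).
    # Penalizes every negative hypothesis scored within `margin` of the
    # best-scoring positive, pushing a positive grasp toward the top rank.
    scores = X @ W                         # linear scoring model (assumption)
    if not (y == 1).any() or not (y == 0).any():
        return 0.0, np.zeros_like(W)
    best_pos = np.where(y == 1)[0][np.argmax(scores[y == 1])]
    neg_idx = np.where(y == 0)[0]
    violations = margin - (scores[best_pos] - scores[neg_idx])
    active = violations > 0
    loss = float(violations[active].sum())
    grad = np.zeros_like(W)
    if active.any():
        grad = X[neg_idx[active]].sum(axis=0) - active.sum() * X[best_pos]
    return loss, grad

def sgd_train(hypothesis_sets, d, lr=1e-3, epochs=20, seed=0):
    # Plain SGD over hypothesis sets
    # (one set = the grasp candidates generated for one partial object view).
    rng = np.random.default_rng(seed)
    W = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(len(hypothesis_sets)):
            X, y = hypothesis_sets[i]
            _, g = top1_ranking_loss_grad(W, X, y)
            W -= lr * g
    return W

Here hypothesis_sets would be a list of (X, y) pairs, one per object view. In the paper the scoring function is learned from richer grasp representations, so the linear model above is only a stand-in for how a per-set ranking loss can be optimized with SGD.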

Author(s): Kappler, Daniel and Schaal, Stefan and Bohg, Jeannette
Book Title: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2016
Year: 2016
Month: May
Day: 16-21
Publisher: IEEE
Bibtex Type: Conference Paper (inproceedings)
DOI: 10.1109/ICRA.2016.7487367
Event Name: IEEE International Conference on Robotics and Automation
Event Place: Stockholm, Sweden
State: Published

BibTeX

@inproceedings{daniel_ICRA_2016,
  title = {Optimizing for what matters: the Top Grasp Hypothesis},
  booktitle = {Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2016},
  abstract = {In this paper, we consider the problem of robotic grasping of objects when only partial and noisy sensor data of the environment is available. We are specifically interested in the problem of reliably selecting the best hypothesis from a whole set. This is commonly the case when trying to grasp an object for which we can only observe a partial point cloud from one viewpoint through noisy sensors. There will be many possible ways to successfully grasp this object, and even more which will fail. We propose a supervised learning method that is trained with a ranking loss. This explicitly encourages that the top-ranked training grasp in a hypothesis set is also positively labeled. We show how we adapt the standard ranking loss to work with data that has binary labels and explain the benefits of this formulation. Additionally, we show how we can efficiently optimize this loss with stochastic gradient descent. In quantitative experiments, we show that we can outperform previous models by a large margin.},
  publisher = {IEEE},
  address = {Stockholm, Sweden},
  doi = {10.1109/ICRA.2016.7487367},
  month = may,
  year = {2016},
  slug = {daniel_icra_2016},
  author = {Kappler, Daniel and Schaal, Stefan and Bohg, Jeannette},
  month_numeric = {5}
}