A Sequential Group VAE for Robot Learning of Haptic Representations
Haptic representation learning is a difficult task in robotics because information can be gathered only by actively exploring the environment over time, and because different actions elicit different object properties. We propose a Sequential Group VAE that leverages object persistence to learn and update latent general representations of multimodal haptic data. As a robot performs sequences of exploratory procedures on an object, the model accumulates data and learns to distinguish between general object properties, such as size and mass, and trial-to-trial variations, such as initial object position. We demonstrate that after very few observations, the general latent representations are sufficiently refined to accurately encode many haptic object properties.
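To picture the split the abstract describes between object-level ("group") properties and trial-to-trial variation, the sketch below outlines a sequential group-style VAE in PyTorch. It is an illustrative assumption-laden outline, not the authors' released model: all dimensions, layer sizes, names, and the mean-fusion of per-trial group estimates are hypothetical choices made only for this example.

# Minimal sketch of a sequential group-VAE split; NOT the authors' implementation.
# Assumptions: each haptic trial is a flattened feature vector of size OBS_DIM,
# and the object-level ("group") latent is fused across trials of the same object
# by averaging the per-trial Gaussian parameters.

import torch
import torch.nn as nn

OBS_DIM, GROUP_DIM, TRIAL_DIM, HID = 64, 8, 4, 128  # hypothetical sizes

class SequentialGroupVAESketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(OBS_DIM, HID), nn.ReLU())
        self.group_head = nn.Linear(HID, 2 * GROUP_DIM)  # mu, logvar: object-level latent
        self.trial_head = nn.Linear(HID, 2 * TRIAL_DIM)  # mu, logvar: trial-specific latent
        self.decoder = nn.Sequential(
            nn.Linear(GROUP_DIM + TRIAL_DIM, HID), nn.ReLU(), nn.Linear(HID, OBS_DIM)
        )

    @staticmethod
    def reparameterize(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, trials):  # trials: (T, OBS_DIM), all from one object
        h = self.encoder(trials)
        g_mu, g_logvar = self.group_head(h).chunk(2, dim=-1)
        t_mu, t_logvar = self.trial_head(h).chunk(2, dim=-1)
        # Object persistence: fuse per-trial estimates into a single group latent.
        g_mu, g_logvar = g_mu.mean(0, keepdim=True), g_logvar.mean(0, keepdim=True)
        z_g = self.reparameterize(g_mu, g_logvar).expand(trials.shape[0], -1)
        z_t = self.reparameterize(t_mu, t_logvar)
        recon = self.decoder(torch.cat([z_g, z_t], dim=-1))
        kl = lambda mu, lv: -0.5 * torch.sum(1 + lv - mu.pow(2) - lv.exp())
        loss = nn.functional.mse_loss(recon, trials, reduction="sum") \
               + kl(g_mu, g_logvar) + kl(t_mu, t_logvar)
        return recon, loss

# Usage example (hypothetical data): five trials on one object.
# trials = torch.randn(5, OBS_DIM)
# recon, loss = SequentialGroupVAESketch()(trials)

In this sketch, averaging the per-trial group parameters is one simple way to mimic how the described model "accumulates data" over a sequence of exploratory procedures; the paper itself should be consulted for the actual update rule.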
Author(s): Benjamin A. Richardson, Katherine J. Kuchenbecker, and Georg Martius
Pages: 1–11
Year: 2022
Month: December
BibTeX Type: Miscellaneous (misc)
Address: Auckland, New Zealand
Electronic Archiving: grant_archive
How Published: Workshop paper (8 pages) presented at the CoRL Workshop on Aligning Robot Representations with Humans
State: Published
URL: https://aligning-robot-human-representations.github.io/docs/camready_11.pdf
BibTeX
@misc{Richardson22-CORLWS-Sequential,
  title         = {A Sequential Group {VAE} for Robot Learning of Haptic Representations},
  abstract      = {Haptic representation learning is a difficult task in robotics because information can be gathered only by actively exploring the environment over time, and because different actions elicit different object properties. We propose a Sequential Group VAE that leverages object persistence to learn and update latent general representations of multimodal haptic data. As a robot performs sequences of exploratory procedures on an object, the model accumulates data and learns to distinguish between general object properties, such as size and mass, and trial-to-trial variations, such as initial object position. We demonstrate that after very few observations, the general latent representations are sufficiently refined to accurately encode many haptic object properties.},
  pages         = {1--11},
  howpublished  = {Workshop paper (8 pages) presented at the CoRL Workshop on Aligning Robot Representations with Humans},
  address       = {Auckland, New Zealand},
  month         = dec,
  year          = {2022},
  slug          = {richardson22-corlws-sequential},
  author        = {Richardson, Benjamin A. and Kuchenbecker, Katherine J. and Martius, Georg},
  url           = {https://aligning-robot-human-representations.github.io/docs/camready_11.pdf},
  month_numeric = {12}
}