
Self-supervised Visual Reinforcement Learning with Object-centric Representations


Autonomous agents need large repertoires of skills to act reasonably on new tasks that they have not seen before. However, acquiring these skills using only a stream of high-dimensional, unstructured, and unlabeled observations is a tricky challenge for any autonomous agent. Previous methods have used variational autoencoders to encode a scene into a low-dimensional vector that can be used as a goal for an agent to discover new skills. Nevertheless, in compositional/multi-object environments it is difficult to disentangle all the factors of variation into such a fixed-length representation of the whole scene. We propose to use object-centric representations as a modular and structured observation space, which is learned with a compositional generative world model. We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills. These skills can be further combined to address compositional tasks like the manipulation of several different objects.
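To make the idea of a goal-conditioned attention policy over object slots more concrete, here is a minimal, illustrative sketch in PyTorch. It is not the authors' implementation: the class and parameter names (GoalConditionedAttentionPolicy, slot_dim, action_dim, hidden_dim) and the specific attention layout are assumptions made for this example. The goal is given as a single object slot and used as the attention query over the set of object slots that form the scene representation.

# Illustrative sketch only (assumed names and architecture), not the paper's code.
import torch
import torch.nn as nn

class GoalConditionedAttentionPolicy(nn.Module):
    def __init__(self, slot_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        # The goal slot provides the attention query; object slots are keys/values.
        self.query = nn.Linear(slot_dim, hidden_dim)
        self.key = nn.Linear(slot_dim, hidden_dim)
        self.value = nn.Linear(slot_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, slots: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
        # slots: (batch, num_objects, slot_dim), goal: (batch, slot_dim)
        q = self.query(goal).unsqueeze(1)                   # (B, 1, H)
        k = self.key(slots)                                 # (B, K, H)
        v = self.value(slots)                               # (B, K, H)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        context = (attn @ v).squeeze(1)                     # (B, H)
        return self.head(context)                           # (B, action_dim)

# Usage: 4 object slots of dimension 32, a 6-dimensional continuous action.
policy = GoalConditionedAttentionPolicy(slot_dim=32, action_dim=6)
slots = torch.randn(1, 4, 32)   # object-centric scene representation
goal = torch.randn(1, 32)       # goal specified as a single object slot
action = policy(slots, goal)

The intent of such a design is that the same policy weights apply regardless of how many objects are in the scene, which matches the modular, structured observation space and the composability of skills described in the abstract.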

Author(s): Andrii Zadaianchuk*, Maximilian Seitzer*, and Georg Martius
Book Title: 9th International Conference on Learning Representations (ICLR 2021)
Year: 2021
Month: May
BibTeX Type: Conference Paper (inproceedings)
Electronic Archiving: grant_archive
Note: *equal contribution

BibTeX

@inproceedings{zadaianchuk2021:visual-rl,
  title = {Self-supervised Visual Reinforcement Learning with Object-centric Representations},
  booktitle = {9th International Conference on Learning Representations (ICLR 2021)},
  abstract = {Autonomous agents need large repertoires of skills to act reasonably on new tasks that they have not seen before. However, acquiring these skills using only a stream of high-dimensional, unstructured, and unlabeled observations is a tricky challenge for any autonomous agent. Previous methods have used variational autoencoders to encode a scene into a low-dimensional vector that can be used as a goal for an agent to discover new skills. Nevertheless, in compositional/multi-object environments it is difficult to disentangle all the factors of variation into such a fixed-length representation of the whole scene. We propose to use object-centric representations as a modular and structured observation space, which is learned with a compositional generative world model.
  We show that the structure in the representations in combination with goal-conditioned attention policies helps the autonomous agent to discover and learn useful skills. These skills can be further combined to address compositional tasks like the manipulation of several different objects.},
  month = may,
  year = {2021},
  note = {*equal contribution},
  slug = {sel-sup-vis-rei-lea-wit-obj-rep},
  author = {Zadaianchuk*, Andrii and Seitzer*, Maximilian and Martius, Georg},
  month_numeric = {5}
}