Reinforcement Learning and Control
Model-based Reinforcement Learning and Planning
Object-centric Self-supervised Reinforcement Learning
Self-exploration of Behavior
Causal Reasoning in RL
Equation Learner for Extrapolation and Control
Intrinsically Motivated Hierarchical Learner
Regularity as Intrinsic Reward for Free Play
Curious Exploration via Structured World Models Yields Zero-Shot Object Manipulation
Natural and Robust Walking from Generic Rewards
Goal-conditioned Offline Planning
Offline Diversity Under Imitation Constraints
Learning Diverse Skills for Local Navigation
Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations
Combinatorial Optimization as a Layer / Blackbox Differentiation
Symbolic Regression and Equation Learning
Representation Learning
Stepsize adaptation for stochastic optimization
Probabilistic Neural Networks
Learning with 3D rotations: A hitchhiker’s guide to SO(3)
Research Overview

The Autonomous Learning group develops learning methods that allow robots to adapt to complex and changing environments. Its research is inspired by the self-organization and learning of mammals, with an emphasis on autonomous learning and development. The group tackles fundamental challenges in reinforcement learning, representation learning, and haptic sensation; examples include intrinsically motivated learning, learning to control muscle-driven systems, and creating new machine-learning-based haptic sensors for robotic hands.
Our research focuses on learning-based robot control to create autonomously learning robotic agents, aiming to surpass the limitations of preprogrammed robots and to pave the way for versatile assistants for humans. We are inspired by natural systems that undergo development. Our efforts span multiple research domains.
In intrinsically motivated learning, we explore reinforcement learning techniques resembling children's play, allowing robots to explore their environments autonomously and efficiently. We have investigated intrinsic motivations such as learning progress, surprise, and causal influence [] to enhance robotic exploration. Recent advances in model-based reinforcement learning have led to unprecedented exploration efficiency and zero-shot task generalization []. Incorporating regularity as a drive results in more structured behavior []. Vision-language models are also used to guide exploration [].
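To make one such intrinsic signal concrete, the sketch below computes a surprise-style bonus as the prediction error of a learned forward dynamics model and adds it to the task reward. The model architecture, the weighting factor beta, and the batch interface are illustrative assumptions, not the exact formulation used in our publications.

```python
# Minimal sketch: surprise as an intrinsic reward (illustrative only).
# A learned forward model predicts the next state; its prediction error
# serves as an exploration bonus added to the extrinsic task reward.
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Small MLP predicting s_{t+1} from (s_t, a_t)."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def intrinsic_reward(model, state, action, next_state):
    """Surprise bonus: squared prediction error of the forward model."""
    with torch.no_grad():
        pred = model(state, action)
    return ((pred - next_state) ** 2).mean(dim=-1)

# Usage: combine with the task reward before the policy update
# (beta trades off exploration against exploitation; value is arbitrary here).
state_dim, action_dim, beta = 8, 2, 0.1
model = ForwardModel(state_dim, action_dim)
s = torch.randn(32, state_dim)       # batch of states
a = torch.randn(32, action_dim)      # batch of actions
s_next = torch.randn(32, state_dim)  # observed next states
r_task = torch.randn(32)             # extrinsic rewards from the environment
r_total = r_task + beta * intrinsic_reward(model, s, a, s_next)
```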
We examine muscle-like actuation as an alternative to traditional motors. Our findings suggest that muscles contribute to faster learning and greater robustness [], and we have replicated this effect with modern direct-drive motors []. We also advance RL methods to scale to human-like walking in complex muscle-driven simulations using only generic reward terms [].
In causal representation learning, we explore visual scene representations for robotics tasks, emphasizing causally accurate predictions. Our research shows that object-centric representations aid reinforcement learning and unsupervised goal achievement []. Recent progress scales these methods to real-world videos []. We also investigate how an agent can detect its causal influence on the environment [] and use it to drastically improve the sample efficiency of RL in manipulation problems. Causal influence detection can also be used for data augmentation to increase generalization to unseen configurations of the environment [].
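As a simplified illustration of causal influence detection, the sketch below scores the agent's influence on an object as the spread of a learned model's predictions across sampled actions: if varying the action barely changes the predicted object state, the agent has no influence in that situation. The model interface, the random action sampling, and the variance-based score are assumptions made for illustration, not the exact measure from our papers.

```python
# Minimal sketch: detecting causal action influence (illustrative simplification).
# Sample several candidate actions, predict the object's next state for each,
# and measure how much the predictions spread. Large spread means the action
# causally matters in this state; near-zero spread means no influence.
import torch
import torch.nn as nn

class ObjectForwardModel(nn.Module):
    """MLP predicting the object's next state from (state, action)."""
    def __init__(self, state_dim: int, action_dim: int, obj_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obj_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def action_influence(model, state, action_dim: int, n_samples: int = 64):
    """Spread of predicted object states over randomly sampled actions."""
    with torch.no_grad():
        actions = torch.rand(n_samples, action_dim) * 2 - 1   # actions in [-1, 1]
        states = state.unsqueeze(0).expand(n_samples, -1)
        preds = model(states, actions)                        # (n_samples, obj_dim)
    return preds.var(dim=0).sum()                             # scalar influence score

# Usage: the score can gate an exploration bonus or select transitions worth
# augmenting; the threshold below is an arbitrary illustrative value.
model = ObjectForwardModel(state_dim=10, action_dim=4, obj_dim=3)
score = action_influence(model, torch.randn(10), action_dim=4)
has_influence = score > 0.01
```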