Reinforcement Learning and Control
Model-based Reinforcement Learning and Planning
Object-centric Self-supervised Reinforcement Learning
Self-exploration of Behavior
Causal Reasoning in RL
Equation Learner for Extrapolation and Control
Intrinsically Motivated Hierarchical Learner
Regularity as Intrinsic Reward for Free Play
Curious Exploration via Structured World Models Yields Zero-Shot Object Manipulation
Natural and Robust Walking from Generic Rewards
Goal-conditioned Offline Planning
Offline Diversity Under Imitation Constraints
Learning Diverse Skills for Local Navigation
Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations
Combinatorial Optimization as a Layer / Blackbox Differentiation
Symbolic Regression and Equation Learning
Representation Learning
Stepsize adaptation for stochastic optimization
Probabilistic Neural Networks
Learning with 3D rotations: A hitchhiker’s guide to SO(3)
Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations

We present Wasserstein Adversarial Behavior Imitation (WASABI), a novel approach for learning agile robotic skills from rough, partial, and physically incompatible demonstrations, without hand-crafted reward functions or expert demonstrations. WASABI employs an adversarial imitation learning framework with a Wasserstein GAN formulation to infer a reward from the rough demonstrations, guiding the robot to successfully imitate difficult behaviors such as backflips. In extensive experiments on the quadruped robot Solo 8, and in a cross-platform application on ANYmal, WASABI extracts robust policies and achieves high-fidelity skill replication both in simulation and in the real world. The paper analyzes the limitations of previous formulations such as LSGAN in learning highly dynamic skills, and shows how WASABI addresses these challenges by directly learning a more informative reward function. In addition, we explore cross-platform imitation, in which one robotic platform imitates demonstrations recorded by another, underscoring WASABI's versatility. Experimental results show that WASABI not only outperforms state-of-the-art methods on a range of challenging tasks, but also maintains stable learning performance, opening up new opportunities for imitation learning in robotics.
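The core mechanism described above — a Wasserstein-GAN-style critic that scores transitions and whose output serves as the imitation reward — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear critic, feature dimension, weight clipping, and all variable names are simplifying assumptions (WASABI uses learned networks and a full RL loop).

```python
import numpy as np

# Minimal sketch of a Wasserstein adversarial imitation reward.
# Assumption: a linear "critic" f(x) = w @ x over transition features
# stands in for the learned discriminator network.
rng = np.random.default_rng(0)
dim = 4               # transition feature dimension (hypothetical)
w = np.zeros(dim)     # critic weights

def critic(x):
    return x @ w

def update_critic(demo_batch, policy_batch, lr=0.1, clip=1.0):
    """One WGAN-style critic step: ascend E_demo[f] - E_policy[f],
    then clip weights to keep f approximately 1-Lipschitz."""
    global w
    grad = demo_batch.mean(axis=0) - policy_batch.mean(axis=0)
    w = np.clip(w + lr * grad, -clip, clip)

def reward(x):
    """Imitation reward handed to the RL agent: the critic's score,
    higher for transitions that resemble the demonstrations."""
    return critic(x)

# Toy data: demonstration transitions cluster around +1,
# current-policy transitions around -1.
demo = rng.normal(loc=1.0, scale=0.1, size=(64, dim))
policy = rng.normal(loc=-1.0, scale=0.1, size=(64, dim))

for _ in range(50):
    update_critic(demo, policy)

# Demonstration-like transitions now earn higher reward.
print(reward(demo.mean(axis=0)) > reward(policy.mean(axis=0)))  # → True
```

In the full method this reward signal is maximized by a reinforcement learning algorithm while the critic is trained concurrently against fresh policy rollouts; the sketch only isolates the critic/reward side of that loop.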