Probabilistic Object and Manipulator Tracking

Hand-eye coordination is crucial for capable manipulation of objects. It requires knowing the locations of the manipulator and of the objects, which must be inferred from sensory data. In this project we work with range sensors, which are widespread in robotics and provide dense depth images.
The objective is to continuously infer the 6-DoF poses of all objects involved, at the frame rate of the incoming depth images. This includes the objects the robot is interacting with, as well as the links of its own manipulators. The problem poses a number of challenges that are difficult to address with standard Bayesian filtering methods:
- The measurement, i.e. the dense depth image, is high-dimensional. We therefore investigate how approximate inference can be performed efficiently, e.g. by imposing a factorization over the pixels (see the likelihood sketch after this list).
- Measurements come from multiple modalities, at different rates, and with relative delays. We propose filtering methods that make maximal use of the available information (see the fusion sketch after this list).
- The measurement process is very noisy. We are working on robustifying Kalman filtering methods (see the robust-update sketch after this list).
- Occlusions of objects are pervasive in the context of manipulation. We developed a model of the depth-image generation process that takes occlusion explicitly into account, which has proven to greatly improve robustness (this is reflected in the likelihood sketch after this list).
- The state is high-dimensional when many objects are involved or when the robot has many joints. We have therefore worked on an extension of the particle filter that scales better with the dimensionality of the state space for certain dynamical systems.
Our algorithms are released as open-source code and are evaluated on datasets annotated with ground truth. Furthermore, they provide a basis for research on robotic manipulation, and we have demonstrated their integration into full robotic systems.