Flowing Puppets

We address the problem of estimating the pose of characters in video sequences from TV shows.
Our approach is based on optical flow. People are moving entities, and their motion has distinctive characteristics. Often the whole body moves differently from the background; within the upper body, the fastest-moving parts are likely to be the hands. The articulated structure of the body also generates distinct motion patterns for each body part. In recent years, the computation of dense optical flow has made substantial progress in both accuracy and speed. Based on these observations, we approach the pose estimation problem by relying on dense optical flow as a source of information for better pose estimation.
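As a concrete illustration, the following sketch precomputes dense forward and backward flow between neighboring frames with OpenCV's Farnebäck method. This is only a stand-in for whichever dense flow algorithm is actually used on the sequences; the parameters shown are generic defaults, not values from this work.

import cv2

def dense_flow(gray_a, gray_b):
    # Returns an (H, W, 2) array of per-pixel displacements from gray_a to gray_b.
    return cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

def precompute_flows(frames):
    # frames: list of grayscale images; returns forward and backward flow fields.
    forward, backward = [], []
    for a, b in zip(frames[:-1], frames[1:]):
        forward.append(dense_flow(a, b))   # frame t   -> frame t+1
        backward.append(dense_flow(b, a))  # frame t+1 -> frame t
    return forward, backward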
We precompute the dense optical flow between neighboring frames in the sequence, forward and backward in time. We treat the computed flow as an observation and exploit it in three ways: a) to estimate hand locations; b) to propagate good solutions across frames; c) to "link" pose hypotheses in adjacent frames through the flow so that per-frame image likelihoods can be evaluated jointly.
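The propagation step b) can be pictured as reading the forward flow at each joint of a good pose estimate and shifting the joints accordingly to obtain a hypothesis in the next frame. The sketch below assumes a simple set of 2D joint positions and a nearest-neighbor flow lookup; both are illustrative simplifications, not the region-based representation actually used.

import numpy as np

def sample_flow(flow, x, y):
    # Nearest-neighbor lookup of the (dx, dy) displacement at (x, y), clamped to the image.
    h, w = flow.shape[:2]
    xi = int(np.clip(round(x), 0, w - 1))
    yi = int(np.clip(round(y), 0, h - 1))
    return flow[yi, xi]

def propagate_pose(joints, forward_flow):
    # joints: (N, 2) array of (x, y) joint positions in frame t.
    # Returns the flow-shifted positions, used as a pose hypothesis for frame t+1.
    return np.array([np.array([x, y]) + sample_flow(forward_flow, x, y)
                     for x, y in joints])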
We represent the upper body with the Deformable Structures (DS) model and exploit its region-based body-part representation to estimate how the body moves over time. We call the resulting moving DS models Flowing Puppets.
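The "linking" step c) can be sketched as follows, under the assumption of a generic per-frame image likelihood: a hypothesis in frame t is scored together with its flow-propagated counterparts in the two neighboring frames, so a single score reflects evidence from several frames. The image_likelihood and propagate_pose functions are hypothetical placeholders; the DS region-based likelihood itself is not reproduced here.

def linked_score(pose_t, frames, flows_fwd, flows_bwd, t,
                 image_likelihood, propagate_pose):
    # Score the hypothesis in its own frame.
    score = image_likelihood(pose_t, frames[t])
    # Push the hypothesis forward with the flow and score it in frame t+1.
    if t + 1 < len(frames):
        pose_next = propagate_pose(pose_t, flows_fwd[t])
        score += image_likelihood(pose_next, frames[t + 1])
    # Push the hypothesis backward with the flow and score it in frame t-1.
    if t > 0:
        pose_prev = propagate_pose(pose_t, flows_bwd[t - 1])
        score += image_likelihood(pose_prev, frames[t - 1])
    return score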