Human Pose, Shape and Action
3D Pose from Images
2D Pose from Images
Beyond Motion Capture
Action and Behavior
Body Perception
Body Applications
Pose and Motion Priors
Clothing Models (2011-2015)
Reflectance Filtering
Learning on Manifolds
Markerless Animal Motion Capture
Multi-Camera Capture
2D Pose from Optical Flow
Body Perception
Neural Prosthetics and Decoding
Part-based Body Models
Intrinsic Depth
Lie Bodies
Layers, Time and Segmentation
Understanding Action Recognition (JHMDB)
Intrinsic Video
Intrinsic Images
Action Recognition with Tracking
Neural Control of Grasping
Flowing Puppets
Faces
Deformable Structures
Model-based Anthropometry
Modeling 3D Human Breathing
Optical flow in the LGN
FlowCap
Smooth Loops from Unconstrained Video
PCA Flow
Efficient and Scalable Inference
Motion Blur in Layers
Facade Segmentation
Smooth Metric Learning
Robust PCA
3D Recognition
Object Detection
Adaptive Locomotion of Soft Microrobots

Soft robots are a class of robots built from highly deformable materials, inspired by the bodies of living organisms. In [], Palagi et al. introduced soft microrobots based on stimuli-responsive polymers. Projecting a light field onto the microrobot heats part of its body, thereby creating a local deformation. With suitable light fields, the microrobots produce periodic deformations that can be used for locomotion. Changing the light field changes the corresponding deformation and thereby produces a different kind of movement, which we call a locomotion pattern []. However, the deformations and the resulting locomotion behavior are hard to model, which makes designing a suitable controller for a desired locomotion challenging. In this project, we circumvent the need for exact physics-based models by employing machine learning: we use data gathered during experiments to automatically learn a suitable controller. This is one of the first works to apply data-driven methods to robotic agents at the micrometer scale.
In [], we propose learning locomotion patterns using Gaussian processes and Bayesian optimization (see figure). The learning scheme evaluates a given controller on experimental data and uses a Gaussian process to model the controller's performance. Based on this model, a new set of controller parameters is chosen automatically and tested in the next experiment. We showed that the proposed scheme improves locomotion performance in a data-efficient way. A semi-synthetic benchmark (built from previously collected data) is used to compare different settings of the learning scheme. The best settings found in the benchmark were validated experimentally: with only 20 experiments, the locomotion speed improved by 115%, i.e., by more than a factor of two.
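The evaluate-model-propose loop described above is a standard Bayesian-optimization loop. The following minimal Python sketch (not the authors' implementation) illustrates it under stated assumptions: the physical microrobot experiment is replaced by a hypothetical synthetic speed function of two controller parameters, and the performance model is a simple RBF-kernel Gaussian process with the expected-improvement acquisition; all names, parameter ranges, and hyperparameters are illustrative.

```python
import numpy as np
from math import erf

# Hypothetical stand-in for a microrobot trial: maps 2D controller
# parameters (e.g., light-field settings, scaled to [0, 1]) to a
# locomotion speed. In the real project this is a physical experiment.
def speed(x):
    x = np.atleast_2d(x)
    return np.exp(-((x[:, 0] - 0.6) ** 2 + (x[:, 1] - 0.3) ** 2) / 0.05)

def rbf_kernel(A, B, length=0.2, var=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP regression: posterior mean and variance at candidate points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """EI acquisition for maximization."""
    sigma = np.sqrt(var)
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.array([erf(zi / np.sqrt(2)) for zi in z]))
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (mu - best) * cdf + sigma * pdf

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))      # a few initial random "experiments"
y = speed(X)
for _ in range(20):                     # budget of 20 trials, as in the project
    cand = rng.uniform(0, 1, size=(200, 2))
    mu, var = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, var, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, speed(x_next))
print(f"best speed found: {y.max():.3f} (initial best: {y[:5].max():.3f})")
```

The expected-improvement criterion trades off trying parameters the GP predicts to perform well against parameters where the model is still uncertain, which is what makes the scheme data-efficient when each evaluation is a costly physical experiment.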
Future work aims to extend the learning scheme so that the controller can adapt to changing environments by learning spatio-temporal correlations, and to develop a systematic way of excluding old and unimportant data from the learning process.
Members
Publications