Human pose, shape, and motion capture

The Perceiving Systems department has pioneered the field of 3D human pose and shape (HPS) estimation and has driven the field's global development through the release of models, code, data, and benchmarks. Our SMPL body model and the related models of the face (FLAME), hands (MANO), and full body (SMPL-X) are de facto standards in academia and industry and have enabled rapid advances in 3D human understanding and its applications. We introduced the first methods to estimate parametric body shape and pose from images (HMR and SMPLify). While these remain core methods in the field, we, and the field, have progressed dramatically. We believe we are close to "solving" this problem and now focus on the core remaining challenges:
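To make the idea of a parametric body model concrete, here is a minimal, purely illustrative sketch (not the actual SMPL code or data): a template mesh is deformed by linear shape blend shapes controlled by low-dimensional coefficients ("betas"), the same principle SMPL uses before posing the mesh. The model sizes and random "learned" blend shapes below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a learned model: a mean template mesh T and a set of
# shape blend-shape directions S_i (SMPL learns these from body scans).
n_verts, n_betas = 100, 10
template = rng.normal(size=(n_verts, 3))                     # mean shape T
shape_dirs = rng.normal(size=(n_betas, n_verts, 3)) * 0.01   # blend shapes S

def shaped_vertices(betas):
    """Linear shape deformation: V(beta) = T + sum_i beta_i * S_i."""
    return template + np.tensordot(betas, shape_dirs, axes=1)

betas = np.zeros(n_betas)                 # zero betas -> the mean body
assert np.allclose(shaped_vertices(betas), template)

betas[0] = 2.0                            # move along the first shape axis
v = shaped_vertices(betas)
print(v.shape)                            # (100, 3)
```

In the real model the shaped mesh is subsequently articulated with pose-dependent corrective blend shapes and linear blend skinning; the linear shape space above is only the first stage.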
Multi-person estimation: Traditional methods estimate the pose of one person in a tightly cropped image region. This prevents the network from exploiting image context and from reasoning about person-person interaction. BEV [] reasons about the whole image and locates people in 3D using an imaginary "bird's-eye view" representation, while TRACE [] extends this idea over time to infer human movement in 5D, where the fifth dimension is subject identity. Humans are often in contact and, to model this, we developed a novel denoising diffusion model called BUDDI [] that represents the joint distribution over the poses of two people in close social interaction. We leverage BUDDI to infer 3D humans and their close interactions from images.
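As a hedged illustration of what "locating people in 3D" involves (this is a textbook pinhole-camera calculation, not BEV's actual method), one can back-project a detected person into camera coordinates from their pixel height: an object of real height H that appears h pixels tall under focal length f lies at depth Z = f * H / h, and the detection center then gives X and Y. The assumed 1.7 m height is a placeholder.

```python
import numpy as np

def person_to_camera_xyz(center_px, height_px, f, principal_pt, real_height=1.7):
    """Back-project a person detection to camera coordinates (pinhole model).

    Depth from similar triangles: Z = f * H / h, then the pixel center
    back-projects to X = (u - cx) * Z / f, Y = (v - cy) * Z / f.
    """
    Z = f * real_height / height_px
    X = (center_px[0] - principal_pt[0]) * Z / f
    Y = (center_px[1] - principal_pt[1]) * Z / f
    return np.array([X, Y, Z])

# A person centered on the optical axis, 425 px tall, f = 1000 px:
p = person_to_camera_xyz(center_px=(640, 360), height_px=425, f=1000,
                         principal_pt=(640, 360))
print(p)  # [0. 0. 4.] -- on the optical axis at 4.0 m depth
```

This simple geometry also hints at why whole-image reasoning matters: relative depths of multiple people fall out of one consistent camera, which a single tight crop cannot provide.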
Cameras and world coordinates: While most methods estimate the 3D body in camera coordinates using a simplified camera model, many applications require estimates in world coordinates. This requires estimating the intrinsic and extrinsic camera parameters. Unfortunately, traditional methods for this often fail when presented with images of humans. To address this, we trained HumanFoV to regress the field of view of a camera from an image of a person, and we exploit it to train CameraHMR [], the new state-of-the-art single-image HPS regressor. For video sequences, WHAM [] exploits the AMASS dataset and image features to infer the camera's angular velocity, the 3D pose of the person, and their foot contact with the scene, yielding the most accurate video-based HPS regression method.
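The camera geometry underlying these methods can be sketched as follows (standard pinhole relations, not the HumanFoV or WHAM code): a regressed horizontal field of view plus the image width determines the focal length, and known extrinsics (R, t) map camera-frame 3D joints into world coordinates.

```python
import numpy as np

def focal_from_fov(fov_deg, image_width):
    """Pinhole relation: f = (W / 2) / tan(FoV / 2)."""
    return 0.5 * image_width / np.tan(np.deg2rad(fov_deg) / 2.0)

def cam_to_world(points_cam, R, t):
    """Invert extrinsics X_c = R @ X_w + t, i.e. X_w = R^T (X_c - t).

    points_cam is (N, 3); row-vector convention, so (X_c - t) @ R = R^T X_c.
    """
    return (points_cam - t) @ R

f = focal_from_fov(90.0, 1280)               # 640 px for a 90-degree FoV
joints_cam = np.array([[0.0, 0.0, 3.0]])     # a joint 3 m in front of the camera
R = np.eye(3)                                # camera aligned with the world...
t = np.array([0.0, 0.0, 1.0])                # ...but offset 1 m along its z-axis
print(f, cam_to_world(joints_cam, R, t))     # 640.0 [[0. 0. 2.]]
```

Getting f right matters: an incorrect field of view biases every back-projected depth, which is why regressing it directly from the image of the person helps single-image HPS.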
The estimation of metrically accurate 3D human shape and motion from arbitrary videos is our north star, as it is key to training computers to understand humans and their behavior.