Perceiving Systems Conference Paper 2015

Pose-Conditioned Joint Angle Limits for 3D Human Pose Reconstruction


The estimation of 3D human pose from 2D joint locations is central to many vision problems involving the analysis of people in images and video. To address the fact that the problem is inherently ill-posed, many methods impose a prior over human poses. Unfortunately, these priors admit invalid poses because they do not model how joint limits vary with pose. Here we make two key contributions. First, we collected a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. The dataset and the prior will be made publicly available. Second, we define a general parameterization of body pose and a new, multistage method to estimate 3D pose from 2D joint locations that uses an over-complete dictionary of human poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D-to-3D pose estimation using the CMU mocap dataset. We also show superior results on manual annotations on real images and automatic part-based detections on the Leeds Sports Pose dataset.
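The core idea of dictionary-based lifting can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the joint count, dictionary size, orthographic camera, and least-squares solver are all illustrative assumptions, and the paper's pose-conditioned joint-angle-limit prior and multistage refinement are omitted.

```python
import numpy as np

# Hedged sketch (not the paper's code): represent a 3D pose as the mean pose
# plus a linear combination of an over-complete pose dictionary, then fit the
# combination coefficients to observed 2D joints under an assumed orthographic
# camera that keeps x,y and drops depth.

rng = np.random.default_rng(0)
J = 15   # number of body joints (assumption)
K = 40   # dictionary size; over-complete relative to the 2J observations

mean_pose = rng.standard_normal(3 * J)   # mean 3D pose, flattened (x,y,z per joint)
B = rng.standard_normal((3 * J, K))      # stand-in dictionary of pose bases

# Orthographic projection matrix: keep the x,y coordinates of each joint.
P = np.zeros((2 * J, 3 * J))
for j in range(J):
    P[2 * j,     3 * j]     = 1.0  # x
    P[2 * j + 1, 3 * j + 1] = 1.0  # y

# Synthesize "observed" 2D joints from a ground-truth coefficient vector.
a_true = rng.standard_normal(K)
x2d = P @ (mean_pose + B @ a_true)

# Least-squares fit of dictionary coefficients to the 2D observations.
A = P @ B
a_hat, *_ = np.linalg.lstsq(A, x2d - P @ mean_pose, rcond=None)
pose3d = (mean_pose + B @ a_hat).reshape(J, 3)
```

Because the dictionary is over-complete and projection discards depth, this linear system is under-determined (`lstsq` just returns the minimum-norm solution), which is exactly why the paper adds a pose-dependent joint-limit prior and a multistage procedure to select a valid 3D pose among the many that reproject correctly.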

Author(s): Ijaz Akhter and Michael J. Black
Book Title: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2015)
Pages: 1446--1455
Year: 2015
Month: June
Bibtex Type: Conference Paper (inproceedings)
DOI: 10.1109/CVPR.2015.7298751
Event Place: Boston, MA, USA

BibTex

@inproceedings{Akhter:CVPR:2015,
  title = {Pose-Conditioned Joint Angle Limits for {3D} Human Pose Reconstruction},
  booktitle = {IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR 2015)},
  abstract = {The estimation of 3D human pose from 2D joint locations is central to many vision problems involving the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collected a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. The dataset and the prior will be made publicly available. Second, we define a general parameterization of body pose and a new, multistage, method to estimate 3D pose from 2D joint locations that uses an over-complete dictionary of human poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results on manual annotations on real images and automatic part-based detections on the Leeds sports pose dataset.},
  pages = {1446--1455},
  month = jun,
  year = {2015},
  slug = {akhter-cvpr-2015},
  author = {Akhter, Ijaz and Black, Michael J.},
  month_numeric = {6}
}