Perceiving Systems Conference Paper 2021

LEAP: Learning Articulated Occupancy of People


Substantial progress has been made on modeling rigid 3D objects using deep implicit representations. Yet, extending these methods to learn neural models of human shape is still in its infancy. Human bodies are complex and the key challenge is to learn a representation that generalizes such that it can express body shape deformations for unseen subjects in unseen, highly articulated poses. To address this challenge, we introduce LEAP (LEarning Articulated occupancy of People), a novel neural occupancy representation of the human body. Given a set of bone transformations (i.e., joint locations and rotations) and a query point in space, LEAP first maps the query point to a canonical space via learned linear blend skinning (LBS) functions and then efficiently queries the occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in the canonical space. Experiments show that our canonicalized occupancy estimation with the learned LBS functions greatly improves the generalization capability of the learned occupancy representation across various human shapes and poses, outperforming existing solutions in all settings.
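The query pipeline described above can be sketched in a few lines: blend the per-bone transformations with skinning weights, invert the blended transform to map the posed-space query point into canonical space, and then evaluate an occupancy function there. This is a minimal illustration, not the paper's implementation: LEAP learns the skinning weights and the occupancy network, whereas here the weights are fixed hypothetical values and the occupancy function is a stand-in (a unit ball in canonical space).

```python
import numpy as np

def inverse_lbs(x, bone_transforms, weights):
    """Map a posed-space query point back to canonical space.

    LEAP predicts the skinning weights with a learned network; here
    they are supplied as a fixed vector purely for illustration.
    """
    # Blend the per-bone 4x4 transforms with the skinning weights,
    # then invert the blended transform to "un-pose" the point.
    blended = np.tensordot(weights, bone_transforms, axes=1)  # (4, 4)
    x_h = np.append(x, 1.0)                                   # homogeneous coords
    return (np.linalg.inv(blended) @ x_h)[:3]

def occupancy(x_canonical):
    # Stand-in for the learned occupancy network: a unit ball.
    return 1.0 if np.linalg.norm(x_canonical) < 1.0 else 0.0

# Two bones: identity, and a translation of 2 along x; weights sum to 1.
bones = np.stack([np.eye(4), np.eye(4)])
bones[1][:3, 3] = [2.0, 0.0, 0.0]
w = np.array([0.5, 0.5])               # hypothetical skinning weights

x_posed = np.array([1.0, 0.0, 0.0])    # query point in posed space
x_canonical = inverse_lbs(x_posed, bones, w)
print(occupancy(x_canonical))          # → 1.0 (maps to the canonical origin)
```

Here the blended transform is a translation by [1, 0, 0], so its inverse maps the posed point back to the canonical origin, which the stand-in occupancy function classifies as inside the body.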

Author(s): Marko Mihajlovic and Yan Zhang and Michael J. Black and Siyu Tang
Book Title: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)
Pages: 10456--10466
Year: 2021
Month: June
Publisher: IEEE
Bibtex Type: Conference Paper (inproceedings)
Address: Piscataway, NJ
DOI: 10.1109/CVPR46437.2021.01032
Event Name: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)
Event Place: Virtual
State: Published
Electronic Archiving: grant_archive
ISBN: 978-1-6654-4509-2

BibTex

@inproceedings{LEAP:CVPR:2021,
  title = {{LEAP}: Learning Articulated Occupancy of People},
  booktitle = {2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)},
  abstract = {Substantial progress has been made on modeling rigid 3D objects using deep implicit representations. Yet, extending these methods to learn neural models of human shape is still in its infancy. Human bodies are complex and the key challenge is to learn a representation that generalizes such that it can express body shape deformations for unseen subjects in unseen, highly articulated poses. To address this challenge, we introduce LEAP (LEarning Articulated occupancy of People), a novel neural occupancy representation of the human body. Given a set of bone transformations (i.e., joint locations and rotations) and a query point in space, LEAP first maps the query point to a canonical space via learned linear blend skinning (LBS) functions and then efficiently queries the occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in the canonical space. Experiments show that our canonicalized occupancy estimation with the learned LBS functions greatly improves the generalization capability of the learned occupancy representation across various human shapes and poses, outperforming existing solutions in all settings.},
  pages = {10456--10466},
  publisher = {IEEE},
  address = {Piscataway, NJ},
  month = jun,
  year = {2021},
  slug = {leap-cvpr-2021},
  author = {Mihajlovic, Marko and Zhang, Yan and Black, Michael J. and Tang, Siyu},
  month_numeric = {6}
}