Perceiving Systems Conference Paper 2017

Deep representation learning for human motion prediction and classification


Generative models of 3D human motion are often restricted to a small number of activities and therefore do not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction, even though those methods use action-specific training data. Our results show that deep feedforward networks, trained on a generic mocap database, can successfully be used for feature extraction from human motion data, and that this representation can serve as a foundation for classification and prediction.

Author(s): Judith Bütepage and Michael Black and Danica Kragic and Hedvig Kjellström
Book Title: Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017
Pages: 1591-1599
Year: 2017
Month: July
Day: 21-26
Publisher: IEEE
Bibtex Type: Conference Paper (inproceedings)
Address: Piscataway, NJ, USA
Event Name: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Event Place: Honolulu, HI, USA
ISBN: 978-1-5386-0457-1
ISSN: 1063-6919

BibTex

@inproceedings{Butepage:CVPR:2017,
  title = {Deep representation learning for human motion prediction and classification},
  booktitle = {Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017},
  abstract = {Generative models of 3D human motion are often restricted to a small number of activities and therefore do not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction, even though those methods use action-specific training data. Our results show that deep feedforward networks, trained on a generic mocap database, can successfully be used for feature extraction from human motion data, and that this representation can serve as a foundation for classification and prediction.},
  pages = {1591-1599},
  publisher = {IEEE},
  address = {Piscataway, NJ, USA},
  month = jul,
  year = {2017},
  slug = {butepage-cvpr-2017},
  author = {B{\"u}tepage, Judith and Black, Michael and Kragic, Danica and Kjellstr{\"o}m, Hedvig},
  month_numeric = {7}
}