Perceiving Systems Conference Paper 2021

Action-Conditioned 3D Human Motion Synthesis with Transformer VAE

ACTOR

We tackle the problem of action-conditioned generation of realistic and diverse human motion sequences. In contrast to methods that complete, or extend, motion sequences, this task does not require an initial pose or sequence. Here we learn an action-aware latent representation for human motions by training a generative variational autoencoder (VAE). By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action. Specifically, we design a Transformer-based architecture, ACTOR, for encoding and decoding a sequence of parametric SMPL human body models estimated from action recognition datasets. We evaluate our approach on the NTU RGB+D, HumanAct12 and UESTC datasets and show improvements over the state of the art. Furthermore, we present two use cases: improving action recognition through adding our synthesized data to training, and motion denoising. Code and models are available on our project page.
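Code Sketch

To make the generation pipeline described above concrete, here is a minimal, hypothetical PyTorch sketch of action-conditioned, variable-length decoding in the spirit of ACTOR: a latent vector sampled from the VAE prior is shifted by a learnable per-action bias, and a stack of sinusoidal positional encodings queries a Transformer decoder for the desired number of frames. All names and hyperparameters (num_actions, latent_dim, pose_dim, the 4-layer decoder) are illustrative assumptions, not the authors' released implementation; see the project page for the actual code and models.

import math
import torch
import torch.nn as nn

class ActionConditionedDecoder(nn.Module):
    def __init__(self, num_actions=12, latent_dim=256, pose_dim=24 * 6, max_len=200):
        super().__init__()
        # One learnable bias per action category, added to the sampled latent.
        self.action_bias = nn.Parameter(torch.randn(num_actions, latent_dim))
        # Fixed sinusoidal positional encodings serve as per-frame queries.
        pe = torch.zeros(max_len, latent_dim)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, latent_dim, 2).float()
                        * (-math.log(10000.0) / latent_dim))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        layer = nn.TransformerDecoderLayer(d_model=latent_dim, nhead=4,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.to_pose = nn.Linear(latent_dim, pose_dim)  # SMPL pose parameters

    def forward(self, z, action, duration):
        # z: (batch, latent_dim), sampled from the prior N(0, I).
        # Condition the latent on the action, then decode `duration`
        # positional-encoding queries into a pose sequence.
        memory = (z + self.action_bias[action]).unsqueeze(1)
        queries = self.pe[:duration].unsqueeze(0).expand(z.size(0), -1, -1)
        out = self.decoder(queries, memory)
        return self.to_pose(out)  # (batch, duration, pose_dim)

# Usage: sample a 60-frame motion for one action class (id is illustrative).
dec = ActionConditionedDecoder()
z = torch.randn(1, 256)
motion = dec(z, action=torch.tensor([3]), duration=60)
print(motion.shape)  # torch.Size([1, 60, 144])

Varying `duration` changes the sequence length without retraining, which is the role the positional-encoding queries play in the abstract's description.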

Author(s): Mathis Petrovich and Michael J. Black and Gül Varol
Book Title: Proc. International Conference on Computer Vision (ICCV)
Pages: 10965--10975
Year: 2021
Month: October
Publisher: IEEE
Bibtex Type: Conference Paper (inproceedings)
Address: Piscataway, NJ
DOI: 10.1109/ICCV48922.2021.01080
Event Name: International Conference on Computer Vision 2021
Event Place: virtual (originally Montreal, Canada)
State: Published
ISBN: 978-1-6654-2812-5

BibTeX

@inproceedings{ACTOR:ICCV:2021,
  title = {Action-Conditioned {3D} Human Motion Synthesis with Transformer {VAE}},
  booktitle = {Proc. International Conference on Computer Vision (ICCV)},
  abstract = {We tackle the problem of action-conditioned generation of realistic and diverse human motion sequences. In contrast to methods that complete, or extend, motion sequences, this task does not require an initial pose or sequence. Here we learn an action-aware latent representation for human motions by training a generative variational autoencoder (VAE). By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action. Specifically, we design a Transformer-based architecture, ACTOR, for encoding and decoding a sequence of parametric SMPL human body models estimated from action recognition datasets. We evaluate our approach on the NTU RGB+D, HumanAct12 and UESTC datasets and show improvements over the state of the art. Furthermore, we present two use cases: improving action recognition through adding our synthesized data to training, and motion denoising. Code and models are available on our project page.},
  pages = {10965--10975},
  publisher = {IEEE},
  address = {Piscataway, NJ},
  month = oct,
  year = {2021},
  doi = {10.1109/ICCV48922.2021.01080},
  slug = {actor-iccv-2021},
  author = {Petrovich, Mathis and Black, Michael J. and Varol, G\"{u}l},
  month_numeric = {10}
}