Perceiving Systems Conference Paper 2024

MotionFix: Text-Driven 3D Human Motion Editing


The focus of this paper is 3D motion editing. Given a 3D human motion and a textual description of the desired modification, our goal is to generate an edited motion as described by the text. The challenges include the lack of training data and the design of a model that faithfully edits the source motion. In this paper, we address both of these challenges. We build a methodology to semi-automatically collect a dataset of triplets comprising (i) a source motion, (ii) a target motion, and (iii) an edit text, and use it to create the new MotionFix dataset. Having access to such data allows us to train a conditional diffusion model that takes both the source motion and the edit text as input. We further build several baselines trained only on datasets of text–motion pairs and show that our model trained on triplets outperforms them. We introduce new retrieval-based metrics for motion editing and establish a new benchmark on the evaluation set. Our results are encouraging, paving the way for further research on fine-grained motion generation. Code and models will be made publicly available.
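The page does not detail the model architecture, but the conditioning setup the abstract describes (a diffusion model denoising the target motion while conditioned on the source motion and the edit text) can be sketched at a high level. Everything below is illustrative: the dimensions, the linear placeholder denoiser, and the noise schedule are assumptions standing in for the paper's actual network and training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): motions flattened to 48 values,
# edit text embedded into a 4-dim vector.
MOTION_DIM, TEXT_DIM = 48, 4

def denoiser(x_t, source, text_emb, t, W):
    """Placeholder denoiser: a single linear map over the concatenated
    conditioning, standing in for whatever backbone the paper uses."""
    inp = np.concatenate([x_t, source, text_emb, [t]])
    return W @ inp  # predicts the noise that was added to the target motion

def training_step(source, target, text_emb, W, alphas_bar):
    """One DDPM-style training step: noise the *target* motion, then ask the
    denoiser to recover the noise given the *source* motion and edit text."""
    t = rng.integers(len(alphas_bar))
    eps = rng.standard_normal(MOTION_DIM)
    a = alphas_bar[t]
    x_t = np.sqrt(a) * target + np.sqrt(1.0 - a) * eps  # forward diffusion
    eps_hat = denoiser(x_t, source, text_emb, t / len(alphas_bar), W)
    return float(np.mean((eps_hat - eps) ** 2))  # noise-prediction loss

# One toy triplet: (source motion, target motion, edit-text embedding).
source = rng.standard_normal(MOTION_DIM)
target = rng.standard_normal(MOTION_DIM)
text_emb = rng.standard_normal(TEXT_DIM)
W = 0.01 * rng.standard_normal((MOTION_DIM, 2 * MOTION_DIM + TEXT_DIM + 1))
alphas_bar = np.linspace(0.999, 0.01, 100)  # assumed precomputed schedule
loss = training_step(source, target, text_emb, W, alphas_bar)
print(f"loss = {loss:.4f}")
```

The key point the sketch illustrates is that only the target motion is noised; the source motion and text embedding enter the denoiser clean as conditioning, which is what lets the model edit rather than generate from scratch.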

Author(s): Nikos Athanasiou and Alpár Cseke and Markos Diomataris and Michael J. Black and Gül Varol
Book Title: SIGGRAPH Asia 2024 Conference Proceedings
Year: 2024
Month: December
Day: 5
Publisher: ACM
BibTeX Type: Conference Paper (inproceedings)
Event Name: SIGGRAPH Asia
Event Place: Tokyo, Japan
State: Published
URL: https://motionfix.is.tue.mpg.de/

BibTeX

@inproceedings{athanasiou-motionfix-24,
  title = {{MotionFix}: Text-Driven {3D} Human Motion Editing},
  booktitle = {SIGGRAPH Asia 2024 Conference Proceedings},
  abstract = {The focus of this paper is 3D motion editing. Given a 3D human motion and a textual description of the desired modification, our goal is to generate an edited motion as described by the text. The challenges include the lack of training data and the design of a model that faithfully edits the source motion. In this paper, we address both of these challenges. We build a methodology to semi-automatically collect a dataset of triplets comprising (i) a source motion, (ii) a target motion, and (iii) an edit text, and use it to create the new MotionFix dataset. Having access to such data allows us to train a conditional diffusion model that takes both the source motion and the edit text as input. We further build several baselines trained only on datasets of text–motion pairs and show that our model trained on triplets outperforms them. We introduce new retrieval-based metrics for motion editing and establish a new benchmark on the evaluation set. Our results are encouraging, paving the way for further research on fine-grained motion generation. Code and models will be made publicly available.},
  publisher = {ACM},
  month = dec,
  year = {2024},
  slug = {athanasiou-motionfix-24},
  author = {Athanasiou, Nikos and Cseke, Alpár and Diomataris, Markos and Black, Michael J. and Varol, G{\"u}l},
  url = {https://motionfix.is.tue.mpg.de/},
  month_numeric = {12}
}