Perceiving Systems Conference Paper 2017

A Generative Model of People in Clothing


We present the first image-based generative model of people in clothing in a full-body setting. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generation process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.
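The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the interfaces only: the stand-in functions, class count, resolution, and keypoint format are all assumptions for the sketch, not the authors' learned networks.

```python
import numpy as np

N_CLASSES = 7   # assumed number of body/clothing segment labels
H, W = 64, 32   # assumed output resolution

def stage1_segmentation(pose_keypoints, rng):
    """Stage 1: map a pose conditioning signal to a semantic
    segmentation of body and clothing (one label per pixel).
    A real model would be a learned conditional network; here we
    just sample labels to fix the interface."""
    return rng.integers(0, N_CLASSES, size=(H, W))

def stage2_image(segmentation, rng):
    """Stage 2: translate the segment map into an RGB image.
    Stand-in: color each segment with a random per-label color."""
    palette = rng.random((N_CLASSES, 3))
    return palette[segmentation]  # (H, W, 3), values in [0, 1]

def sample_person(pose_keypoints, seed=0):
    """Chain the two stages: pose -> segmentation -> image."""
    rng = np.random.default_rng(seed)
    seg = stage1_segmentation(pose_keypoints, rng)
    img = stage2_image(seg, rng)
    return seg, img

seg, img = sample_person(pose_keypoints=np.zeros((18, 2)))
print(seg.shape, img.shape)  # (64, 32) (64, 32, 3)
```

Splitting generation this way lets each stage handle one source of variance: the segmentation stage captures pose and shape, while the image stage only has to fill segments with plausible texture and color.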

Author(s): Christoph Lassner and Gerard Pons-Moll and Peter V. Gehler
Book Title: Proceedings IEEE International Conference on Computer Vision (ICCV)
Pages: 853-862
Year: 2017
Month: October
Day: 22-29
Publisher: IEEE
Bibtex Type: Conference Paper (inproceedings)
Address: Piscataway, NJ, USA
Event Name: IEEE International Conference on Computer Vision (ICCV)
Event Place: Venice, Italy
URL: http://files.is.tuebingen.mpg.de/classner/gp/
ISBN: 978-1-5386-1032-9
ISSN: 2380-7504

BibTeX

@inproceedings{Lassner:GP:2017,
  title = {A Generative Model of People in Clothing},
  booktitle = {Proceedings IEEE International Conference on Computer Vision (ICCV)},
  abstract = {We present the first image-based generative model of people in clothing in a full-body setting. We sidestep the commonly used complex graphics rendering pipeline and the need for high-quality 3D scans of dressed people. Instead, we learn generative models from a large image database. The main challenge is to cope with the high variance in human pose, shape and appearance. For this reason, pure image-based approaches have not been considered so far. We show that this challenge can be overcome by splitting the generation process into two parts. First, we learn to generate a semantic segmentation of the body and clothing. Second, we learn a conditional model on the resulting segments that creates realistic images. The full model is differentiable and can be conditioned on pose, shape or color. The results are samples of people in different clothing items and styles. The proposed model can generate entirely new people with realistic clothing. In several experiments we present encouraging results that suggest an entirely data-driven approach to people generation is possible.},
  pages = {853--862},
  publisher = {IEEE},
  address = {Piscataway, NJ, USA},
  month = oct,
  year = {2017},
  slug = {lassner-gp-2017},
  author = {Lassner, Christoph and Pons-Moll, Gerard and Gehler, Peter V.},
  url = {http://files.is.tuebingen.mpg.de/classner/gp/},
  month_numeric = {10}
}