Perceiving Systems Article 2017

ClothCap: Seamless 4D Clothing Capture and Retargeting


Designing and simulating realistic clothing is challenging and, while several methods have addressed the capture of clothing from 3D scans, previous methods have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the naked body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. The model allows us to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes. ClothCap provides a step towards virtual try-on with a technology for capturing, modeling, and analyzing clothing in motion.
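
The abstract outlines a multi-stage pipeline: segment each garment from the scan, estimate the naked body shape and pose underneath, track the garment deformations across the 4D sequence, and retarget the result to new bodies. The following is a minimal Python sketch of that control flow only; every class, function, and field name here (Scan, clothcap, segment_garments, and so on) is a hypothetical placeholder for illustration, not the authors' implementation or API.

  # Hypothetical sketch of the pipeline stages described in the abstract.
  # All names and data structures are illustrative placeholders.

  from dataclasses import dataclass, field


  @dataclass
  class Scan:
      """One frame of a 4D sequence: a high-resolution 3D scan (60 fps)."""
      vertices: list   # placeholder for an (N, 3) array of scan points
      frame: int


  @dataclass
  class CaptureResult:
      body: object = None                            # naked shape/pose under the clothing
      garments: dict = field(default_factory=dict)   # garment name -> per-frame meshes


  def segment_garments(scan):
      """Placeholder: label scan regions as skin or as one of the garment parts."""
      return {"shirt": [], "pants": []}


  def estimate_body(scan, labels):
      """Placeholder: fit body shape and pose to the skin regions of the scan."""
      return {"shape": None, "pose": None}


  def track_garment(history, scan, labels, name):
      """Placeholder: deform the garment mesh from the previous frame onto this scan."""
      return []


  def clothcap(sequence):
      """Per frame: segment the garments, estimate the body, track each garment."""
      result = CaptureResult()
      for scan in sequence:
          labels = segment_garments(scan)
          result.body = estimate_body(scan, labels)
          for name in labels:
              history = result.garments.setdefault(name, [])
              history.append(track_garment(history, scan, labels, name))
      return result


  def retarget(garments, new_body):
      """Placeholder: dress a new body shape with the captured garments."""
      return {name: meshes for name, meshes in garments.items()}

The point of the sketch is the ordering: segmentation must precede both body estimation and tracking, because each garment has to be separated from the others and from the body before its shape can be estimated or deformed over time.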

Author(s): Gerard Pons-Moll and Sergi Pujades and Sonny Hu and Michael J. Black
Journal: ACM Transactions on Graphics (Proc. SIGGRAPH)
Volume: 36
Number (issue): 4
Pages: 73:1--73:15
Year: 2017
Publisher: ACM
BibTeX Type: Article (article)
DOI: 10.1145/3072959.3073711
URL: http://dx.doi.org/10.1145/3072959.3073711
Address: New York, NY, USA
Note: Two first authors contributed equally

BibTeX

@article{Pons-Moll:Siggraph2017,
  title = {{ClothCap}: Seamless {4D} Clothing Capture and Retargeting},
  journal = {ACM Transactions on Graphics (Proc. SIGGRAPH)},
  abstract = {Designing and simulating realistic clothing is challenging and, while several methods have addressed the capture of clothing from 3D scans, previous methods have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the naked body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. The model allows us to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes. ClothCap provides a step towards virtual try-on with a technology for capturing, modeling, and analyzing clothing in motion.},
  volume = {36},
  number = {4},
  pages = {73:1--73:15},
  publisher = {ACM},
  address = {New York, NY, USA},
  year = {2017},
  note = {Two first authors contributed equally},
  slug = {pons-moll-siggraph2017},
  author = {Pons-Moll, Gerard and Pujades, Sergi and Hu, Sonny and Black, Michael J.},
  doi = {10.1145/3072959.3073711},
  url = {http://dx.doi.org/10.1145/3072959.3073711}
}