Perceiving Systems Conference Paper 2020

PLACE: Proximity Learning of Articulation and Contact in 3D Environments


High-fidelity digital 3D environments have been proposed in recent years; however, it remains extremely challenging to automatically populate such environments with realistic human bodies. Existing work utilizes images, depth, or semantic maps to represent the scene, and parametric human models to represent 3D bodies. While straightforward, the generated human-scene interactions often lack naturalness and physical plausibility. Our key observation is that humans interact with the world through body-scene contact. To synthesize realistic human-scene interactions, it is essential to effectively represent the physical contact and proximity between the body and the world. To that end, we propose a novel interaction generation method, named PLACE (Proximity Learning of Articulation and Contact in 3D Environments), which explicitly models the proximity between the human body and the surrounding 3D scene. Specifically, given a set of basis points on a scene mesh, we leverage a conditional variational autoencoder to synthesize the minimum distances from the basis points to the human body surface. The generated proximal relationship indicates which regions of the scene are in contact with the person. Furthermore, based on this synthesized proximity, we are able to effectively obtain expressive 3D human bodies that interact naturally with the 3D scene. Our perceptual study shows that PLACE significantly outperforms the state-of-the-art method, approaching the realism of real human-scene interactions. We believe our method takes an important step towards the fully automatic synthesis of realistic 3D human bodies in 3D scenes. The code and model are available for research at https://sanweiliti.github.io/PLACE/PLACE.html
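The core proximity representation described in the abstract can be sketched as follows. This is an illustrative assumption, not the authors' released code: the function name `bps_min_distances`, the use of a k-d tree for nearest-neighbor search, and the approximation of the body surface by its mesh vertices are all choices made for this sketch.

```python
# Hedged sketch of a basis-point-set (BPS) proximity feature: for each fixed
# basis point in the scene, the minimum distance to the human body surface
# (approximated here by the body mesh vertices). PLACE's cVAE synthesizes
# exactly such a distance vector conditioned on the scene.
import numpy as np
from scipy.spatial import cKDTree


def bps_min_distances(basis_points, body_vertices):
    """Return, for each basis point, its minimum distance to the body vertices.

    basis_points:  (N, 3) array of fixed points sampled on/near the scene mesh.
    body_vertices: (M, 3) array approximating the body surface (e.g. SMPL-X
                   vertices, assumed here for illustration).
    """
    tree = cKDTree(body_vertices)          # nearest-neighbor structure on the body
    dists, _ = tree.query(basis_points, k=1)
    return dists                           # shape (N,), non-negative distances


# Toy example with random points standing in for a scene basis and a body.
rng = np.random.default_rng(0)
basis = rng.uniform(-1.0, 1.0, size=(1024, 3))
body = rng.uniform(-0.2, 0.2, size=(500, 3))
feat = bps_min_distances(basis, body)
print(feat.shape)  # (1024,)
```

Small distance values in `feat` mark scene regions close to (or in contact with) the body, which is the signal PLACE learns to generate before fitting a body to it.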

Author(s): Siwei Zhang and Yan Zhang and Qianli Ma and Michael J. Black and Siyu Tang
Book Title: 2020 International Conference on 3D Vision (3DV 2020)
Volume: 1
Pages: 642--651
Year: 2020
Month: November
Publisher: IEEE
Bibtex Type: Conference Paper (inproceedings)
Address: Piscataway, NJ
DOI: 10.1109/3DV50981.2020.00074
Event Name: International Conference on 3D Vision (3DV 2020)
Event Place: Fukuoka
State: Published
ISBN: 978-1-7281-8129-5

BibTex

@inproceedings{PLACE:3DV:2020,
  title = {{PLACE}: Proximity Learning of Articulation and Contact in {3D} Environments},
  booktitle = {2020 International Conference on 3D Vision (3DV 2020)},
  abstract = {High-fidelity digital 3D environments have been proposed in recent years; however, it remains extremely challenging to automatically populate such environments with realistic human bodies. Existing work utilizes images, depth, or semantic maps to represent the scene, and parametric human models to represent 3D bodies. While straightforward, the generated human-scene interactions often lack naturalness and physical plausibility. Our key observation is that humans interact with the world through body-scene contact. To synthesize realistic human-scene interactions, it is essential to effectively represent the physical contact and proximity between the body and the world. To that end, we propose a novel interaction generation method, named PLACE (Proximity Learning of Articulation and Contact in 3D Environments), which explicitly models the proximity between the human body and the surrounding 3D scene. Specifically, given a set of basis points on a scene mesh, we leverage a conditional variational autoencoder to synthesize the minimum distances from the basis points to the human body surface. The generated proximal relationship indicates which regions of the scene are in contact with the person. Furthermore, based on this synthesized proximity, we are able to effectively obtain expressive 3D human bodies that interact naturally with the 3D scene. Our perceptual study shows that PLACE significantly outperforms the state-of-the-art method, approaching the realism of real human-scene interactions. We believe our method takes an important step towards the fully automatic synthesis of realistic 3D human bodies in 3D scenes. The code and model are available for research at https://sanweiliti.github.io/PLACE/PLACE.html},
  volume = {1},
  pages = {642--651},
  publisher = {IEEE},
  address = {Piscataway, NJ},
  month = nov,
  year = {2020},
  slug = {place-3dv-2020},
  author = {Zhang, Siwei and Zhang, Yan and Ma, Qianli and Black, Michael J. and Tang, Siyu},
  month_numeric = {11}
}