POSA: Populating 3D Scenes by Learning Human-Scene Interaction

POSA takes a 3D body and automatically places it in a 3D scene in a semantically meaningful way. This repository contains the training, random sampling, and scene population code used for the experiments in POSA. The code defines a novel, body-centric representation of human-scene interaction. This representation can be exploited, for example, in 3D human tracking from video to model likely interactions between the body and the scene.
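To give a flavor of what a body-centric interaction representation looks like, here is a minimal illustrative sketch (not the repository's code) that labels each vertex of a posed body mesh as "in contact" with the scene or not. POSA's full feature map additionally encodes the semantic label of the contacted object; the file names and contact threshold below are assumptions for illustration only.

```python
# Illustrative sketch only; not POSA's implementation.
import numpy as np
import trimesh

CONTACT_THRESHOLD = 0.05  # metres; hypothetical cut-off for "in contact"

def per_vertex_contact(body_mesh_path: str, scene_mesh_path: str) -> np.ndarray:
    """Return a boolean contact flag for every vertex of the body mesh."""
    body = trimesh.load(body_mesh_path, process=False)
    scene = trimesh.load(scene_mesh_path, process=False)

    # Distance from each body vertex to the closest point on the scene surface.
    _, distances, _ = trimesh.proximity.closest_point(scene, body.vertices)

    # A vertex counts as "in contact" when it lies within the threshold of the
    # scene surface. POSA additionally stores the semantic label of the
    # contacted object per vertex, which is omitted here.
    return distances < CONTACT_THRESHOLD

if __name__ == "__main__":
    # Hypothetical file names for a posed SMPL-X body and a scanned scene.
    contact = per_vertex_contact("body.ply", "scene.ply")
    print(f"{contact.sum()} of {contact.size} vertices are in contact")
```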
| | |
|---|---|
| Release Date | 28 April 2021 |
| License | PS:License 1.0 |
| Authors | Mohamed Hassan, Partha Ghosh, Joachim Tesch, Dimitrios Tzionas, Michael J. Black |
| Repository | https://github.com/mohamedhassanmus/POSA |