In this work, we aim to synthesize a free-viewpoint video of an arbitrary human performance using sparse multi-view cameras. Recently, several works have addressed this problem by learning person-specific neural radiance fields (NeRF) to capture the appearance of a particular human. In parallel, other works have proposed using pixel-aligned features to generalize radiance fields to arbitrary new scenes and objects. Adapting such generalization approaches to humans, however, is highly challenging due to the heavy occlusions and dynamic articulations of body parts. To tackle this, we propose a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.
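The pixel-aligned conditioning mentioned above boils down to projecting a 3D query point into each source view and bilinearly sampling a 2D feature map at the projected location. The following is a minimal illustrative sketch of that step, not the paper's implementation; the camera matrices and feature map here are toy placeholders.

```python
import numpy as np

def project(point, K, R, t):
    """Project a 3D world point to pixel coordinates with a pinhole camera."""
    cam = R @ point + t          # world -> camera coordinates
    uv = K @ cam                 # camera -> homogeneous pixel coordinates
    return uv[:2] / uv[2]        # perspective divide

def sample_feature(feat_map, uv):
    """Bilinearly sample an (H, W, C) feature map at continuous pixel coords."""
    h, w, _ = feat_map.shape
    u = float(np.clip(uv[0], 0, w - 1))
    v = float(np.clip(uv[1], 0, h - 1))
    u0, v0 = int(u), int(v)
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat_map[v0, u0]
            + du * (1 - dv) * feat_map[v0, u1]
            + (1 - du) * dv * feat_map[v1, u0]
            + du * dv * feat_map[v1, u1])

# Toy camera and feature map (placeholders, not real calibration data).
K = np.array([[4.0, 0.0, 4.0],
              [0.0, 4.0, 4.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
feat_map = np.random.default_rng(0).normal(size=(8, 8, 4))

point = np.array([0.1, -0.2, 2.0])            # 3D query point along a camera ray
uv = project(point, K, R, t)                  # its 2D footprint in the source view
pixel_feature = sample_feature(feat_map, uv)  # per-view conditioning feature
print(pixel_feature.shape)                    # (4,)
```

In a generalizable radiance field, a feature like `pixel_feature` (aggregated across source views) would be concatenated with the point's positional encoding before being fed to the NeRF MLP, which is what lets the network transfer to unseen subjects.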
Youngjoong Kwon (University of North Carolina at Chapel Hill)
Ph.D. student
Youngjoong Kwon is a Ph.D. student in the Department of Computer Science at the University of North Carolina at Chapel Hill, advised by Prof. Henry Fuchs. Her Ph.D. work focuses on using machine learning techniques to render and reconstruct virtual humans. Previously, she interned at Adobe Research. She is currently a research intern at the Max Planck Institute for Informatics.