3D face reconstruction under occlusions is highly challenging because of the large variability in the appearance and location of occluders. Currently, the most successful methods fit a 3D face model through inverse rendering and assume a given segmentation of the occluder to avoid fitting to it. However, these segmentation annotations are costly, since training an occlusion segmentation model requires large amounts of annotated data. To overcome this, we introduce a model-based approach for 3D face reconstruction that is highly robust to occlusions but does not require any occlusion annotations for training. Our approach exploits the fact that generative face models can only synthesize human faces, not occluders. We use this property to guide the decision-making process of an occlusion segmentation network, resulting in unsupervised training. The main challenge is that the model fitting and the occlusion segmentation are mutually dependent and need to be inferred jointly. We resolve this chicken-and-egg problem with an EM-type training strategy. This leads to a synergistic effect, in which the segmentation network prevents the face encoder from fitting to the occlusion, enhancing the reconstruction quality. The improved 3D face reconstruction, in turn, enables the segmentation network to better predict the occlusion. Qualitative and quantitative experiments demonstrate that the proposed pipeline achieves state-of-the-art 3D face reconstruction under occlusion. Moreover, the segmentation network localizes occlusions accurately despite being trained without any occlusion annotation.
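The EM-type alternation described above can be illustrated on a toy problem. The sketch below is not the paper's implementation; it replaces the face model and segmentation network with a 1-D template and a residual threshold, purely to show the chicken-and-egg structure: fit the model only on pixels believed to be face (M-step), then re-estimate the occlusion mask from the model's residual (E-step), since a face-only model cannot explain occluded pixels.

```python
import numpy as np

# Toy stand-in for the pipeline: all names and numbers are illustrative.
rng = np.random.default_rng(0)

template = np.sin(np.linspace(0, 2 * np.pi, 100))    # "generative face model" basis
true_scale = 2.0                                      # parameter to recover
observed = true_scale * template + 0.01 * rng.standard_normal(100)
observed[30:50] = 5.0                                 # simulated occluder

mask = np.ones(100, dtype=bool)                       # start: everything is "face"
scale = 1.0
for _ in range(5):
    # M-step: fit the model parameter only on pixels currently labeled face
    scale = (template[mask] @ observed[mask]) / (template[mask] @ template[mask])
    # E-step: the model can only synthesize the "face", so large residuals
    # flag occluded pixels; this plays the role of the segmentation network
    residual = np.abs(observed - scale * template)
    mask = residual < 3.0 * np.median(residual)

print(round(scale, 2))          # recovered scale, close to true_scale
print(mask[30:50].sum())        # occluded region is masked out (0 kept)
```

In the full method, the M-step is the inverse-rendering face fit and the E-step is a learned segmentation network, so both improve jointly across training rather than via a fixed threshold.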
Chunlu Li (Southeast University)
PhD
Chunlu Li received her B.S. degree at Southeast University. Currently, she is a PhD student at Southeast University, advised by Prof. Feipeng Da, and an exchange student in the Graphics and Vision Research Group (GRAVIS) at the University of Basel. Her research mainly focuses on face shape modelling and robust model fitting in complex environments.