To fit or not to fit: Model-based Face Reconstruction and Occlusion Segmentation from Weak Supervision

Computer Vision and Pattern Recognition (CVPR), 2021
Abstract

3D face reconstruction under occlusions is highly challenging because of the large variability of occluders. Currently, the most successful methods fit a 3D face model through inverse rendering and assume a given segmentation of the occluder so that fitting to the occluder is avoided. However, training an occlusion segmentation model requires large amounts of annotated data. In this work, we introduce a model-based approach for 3D face reconstruction that is highly robust to occlusions but does not require any occlusion annotations for training. Our approach exploits the fact that generative face models can synthesize human faces but not occluders. We use this property to guide the decision-making process of an occlusion segmentation network, which results in an unsupervised training scheme. The main challenge is that the model fitting and the occlusion segmentation are mutually dependent and must be inferred jointly. We resolve this chicken-and-egg problem with an EM-type training strategy. This leads to a synergistic effect: the segmentation network prevents the face encoder from fitting to the occlusion, which enhances the reconstruction quality, and the improved 3D face reconstruction, in turn, enables the segmentation network to better predict the occlusion. Qualitative and quantitative experiments on the CelebA-HQ and AR databases and on the NoW challenge demonstrate that the proposed pipeline achieves state-of-the-art 3D face reconstruction under occlusion. Moreover, the segmentation network localizes occlusions accurately despite being trained without any occlusion annotation. The code is available at https://github.com/unibas-gravis/Occlusion-Robust-MoFA.
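The EM-type alternation described in the abstract can be illustrated with a toy example. The sketch below is not the paper's pipeline: it replaces the generative face model with a simple linear model and the learned segmentation network with residual thresholding (a hypothetical `tau` threshold). It only demonstrates the chicken-and-egg structure — an E-step that estimates an occlusion mask from the current reconstruction, and an M-step that refits the model using visible pixels only.

```python
import numpy as np

def em_fit_with_occlusion_mask(image, basis, n_iters=10, tau=0.1):
    """Toy EM-style alternation (illustrative only, not the paper's method).

    image: observed pixel values, shape (n_pixels,)
    basis: linear "face model" basis, shape (n_pixels, n_params);
           the model renders as basis @ params.
    """
    # Initialize by fitting all pixels, occluders included.
    params = np.linalg.lstsq(basis, image, rcond=None)[0]
    visible = np.ones_like(image, dtype=bool)
    for _ in range(n_iters):
        # E-step: pixels the face model explains poorly are marked occluded.
        visible = np.abs(image - basis @ params) < tau
        # M-step: refit the model on visible pixels only, so the
        # occluder no longer corrupts the reconstruction.
        params = np.linalg.lstsq(basis[visible], image[visible], rcond=None)[0]
    return params, visible

# Toy data: a 1D "face" that scales a ramp pattern, with an occluder
# overwriting pixels 40..59 with a constant unrelated to the model.
pattern = np.linspace(0.0, 1.0, 100)
image = 0.8 * pattern
image[40:60] = 0.9
params, visible = em_fit_with_occlusion_mask(image, pattern[:, None])
```

In this toy setting the initial fit is biased toward the occluder, but after one E/M round the masked refit recovers the true coefficient (0.8) and the mask excludes the occluded region, mirroring the synergy the paper describes.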
