MagicPortrait: Temporally Consistent Face Reenactment with 3D Geometric Guidance

Abstract

In this study, we propose a method for video face reenactment that integrates a 3D face parametric model into a latent diffusion framework, aiming to improve the shape consistency and motion control of existing video-based face generation approaches. Our approach employs the FLAME (Faces Learned with an Articulated Model and Expressions) model as the 3D face parametric representation, providing a unified framework for modeling facial expressions and head pose. This not only enables precise extraction of motion features from driving videos but also contributes to faithful preservation of face shape and geometry. Specifically, we enhance the latent diffusion model with rich 3D expression and detailed pose information by incorporating depth maps, normal maps, and rendering maps derived from FLAME sequences. These maps serve as motion guidance and are encoded into the denoising UNet through a specifically designed Geometric Guidance Encoder (GGE). A multi-layer feature fusion module with integrated self-attention mechanisms combines facial appearance and motion latent features within the spatial domain. By using the 3D face parametric model as motion guidance, our method enables parametric alignment of face identity between the reference image and the motion captured from the driving video. Experimental results on benchmark datasets show that our method excels at generating high-quality face animations with precise modeling of expression and head pose variation. In addition, it demonstrates strong generalization to out-of-domain images. Code is publicly available at this https URL.
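To make the described pipeline concrete, below is a minimal PyTorch sketch of the two components the abstract names: a Geometric Guidance Encoder that maps stacked FLAME-derived depth, normal, and rendering maps to a spatial motion feature, and a self-attention fusion block that mixes that feature with appearance latents. All class names, channel sizes, and the exact fusion layout here are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of geometric guidance + attention fusion.
# Module names, channel counts, and layouts are assumptions for
# illustration; they are not the paper's released code.
import torch
import torch.nn as nn


class GeometricGuidanceEncoder(nn.Module):
    """Encodes stacked FLAME-derived maps (depth, normal, render)
    into a spatial motion feature for conditioning a denoising UNet."""

    def __init__(self, in_channels: int = 7, feat_channels: int = 320):
        # depth (1) + normal (3) + rendering (3) = 7 input channels (assumed)
        super().__init__()
        self.conv_in = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1),
            nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),
            nn.SiLU(),
            nn.Conv2d(128, feat_channels, 3, stride=2, padding=1),
        )

    def forward(self, guidance_maps: torch.Tensor) -> torch.Tensor:
        # guidance_maps: (B, 7, H, W) per driving frame
        return self.conv_in(guidance_maps)


class FusionBlock(nn.Module):
    """Stand-in for the multi-layer feature fusion module: joint
    self-attention over appearance and motion tokens in the spatial domain."""

    def __init__(self, channels: int = 320, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        b, c, h, w = appearance.shape
        # Flatten both feature maps to (B, H*W, C) tokens and concatenate,
        # so attention can exchange information between the two streams.
        tokens = torch.cat(
            [appearance.flatten(2).transpose(1, 2),
             motion.flatten(2).transpose(1, 2)],
            dim=1,
        )
        fused, _ = self.attn(self.norm(tokens), self.norm(tokens), self.norm(tokens))
        # Keep the appearance-stream tokens, now enriched with motion cues.
        fused = fused[:, : h * w].transpose(1, 2).reshape(b, c, h, w)
        return appearance + fused  # residual connection


if __name__ == "__main__":
    gge = GeometricGuidanceEncoder()
    fuse = FusionBlock()
    maps = torch.randn(2, 7, 512, 512)        # depth + normal + rendering maps
    appearance = torch.randn(2, 320, 64, 64)  # latent appearance features
    motion = gge(maps)                        # -> (2, 320, 64, 64)
    print(fuse(appearance, motion).shape)     # torch.Size([2, 320, 64, 64])
```

The residual connection in the fusion block is one plausible way to keep the appearance stream dominant while injecting motion cues, consistent with the abstract's goal of preserving face shape from the reference image while following the driving motion.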

@article{wei2025_2504.21497,
  title={MagicPortrait: Temporally Consistent Face Reenactment with 3D Geometric Guidance},
  author={Mengting Wei and Yante Li and Tuomas Varanka and Yan Jiang and Guoying Zhao},
  journal={arXiv preprint arXiv:2504.21497},
  year={2025}
}