Identity-preserving face synthesis aims to generate synthetic face images of virtual subjects that can substitute for real-world data when training face recognition models. While prior work strives to create images with consistent identities and diverse styles, it faces a trade-off between the two. This paper identifies the limitation of treating style variation as subject-agnostic and observes that real-world persons actually exhibit distinct, subject-specific styles; accordingly, it introduces MorphFace, a diffusion-based face generator. The generator learns fine-grained facial styles, e.g., shape, pose, and expression, from the renderings of a 3D morphable model (3DMM). It also learns identities from an off-the-shelf recognition model. To create virtual faces, the generator is conditioned on novel identities of unlabeled synthetic faces and on novel styles that are statistically sampled from a real-world prior distribution. The sampling accounts for both intra-subject variation and subject distinctiveness. A context blending strategy is employed to enhance the generator's responsiveness to identity and style conditions. Extensive experiments show that MorphFace outperforms the best prior methods in face recognition efficacy.
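The style sampling described above, balancing intra-subject variation against subject distinctiveness, can be pictured as a two-level hierarchical draw. The sketch below is an illustrative assumption, not the paper's actual procedure: it models the population prior and the per-subject variation as Gaussians, and all names (`STYLE_DIM`, `sample_subject_styles`, the variance values) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensionality of a 3DMM-like style vector
# (e.g., shape/pose/expression coefficients).
STYLE_DIM = 8

# Assumed population-level Gaussian prior over styles.
mu_pop = np.zeros(STYLE_DIM)
sigma_between = 1.0   # spread of subject-specific mean styles (distinctiveness)
sigma_within = 0.3    # intra-subject variation around each subject's mean

def sample_subject_styles(n_images: int) -> np.ndarray:
    """Two-level sampling: first draw a distinct mean style for a
    virtual subject, then draw per-image styles around that mean."""
    subject_mean = rng.normal(mu_pop, sigma_between)  # subject distinctiveness
    # Per-image styles: smaller variance keeps images of one subject coherent.
    return rng.normal(subject_mean, sigma_within, size=(n_images, STYLE_DIM))

styles = sample_subject_styles(4)
print(styles.shape)  # (4, 8)
```

Because `sigma_within` is much smaller than `sigma_between`, images of the same virtual subject share a coherent style while different subjects remain stylistically distinct, which mirrors the trade-off the abstract describes.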
@article{mi2025_2504.00430,
  title={Data Synthesis with Diverse Styles for Face Recognition via 3DMM-Guided Diffusion},
  author={Yuxi Mi and Zhizhou Zhong and Yuge Huang and Qiuyang Yuan and Xuan Zhao and Jianqing Xu and Shouhong Ding and ShaoMing Wang and Rizen Guo and Shuigeng Zhou},
  journal={arXiv preprint arXiv:2504.00430},
  year={2025}
}