Learning Joint ID-Textual Representation for ID-Preserving Image Synthesis

We propose a novel framework for ID-preserving image generation that relies on a multi-modal encoding strategy rather than injecting identity features into a pre-trained model through adapters. Our method treats identity and text as a unified conditioning input. To achieve this, we introduce FaceCLIP, a multi-modal encoder that learns a joint embedding space for identity and textual semantics. Given a reference face and a text prompt, FaceCLIP produces a unified representation encoding both, which conditions a base diffusion model to generate images that are identity-consistent and text-aligned. We also present a multi-modal alignment algorithm to train FaceCLIP, using a loss that aligns its joint representation with the face, text, and image embedding spaces. We then build FaceCLIP-SDXL, an ID-preserving image synthesis pipeline, by integrating FaceCLIP with Stable Diffusion XL (SDXL). Compared with prior methods, FaceCLIP-SDXL produces photorealistic portraits with better identity preservation and textual relevance. Extensive quantitative and qualitative experiments demonstrate its superiority.
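To make the described pipeline more concrete, the sketch below outlines one plausible way to fuse a face-identity embedding with text tokens into a single conditioning sequence and to align the pooled joint representation with face, text, and image embedding spaces. This is not the authors' implementation: all module names (FaceCLIPSketch, alignment_loss), dimensions, backbone choices, and the contrastive formulation of the alignment loss are assumptions made purely for illustration.

```python
# Hypothetical sketch of the joint ID-textual encoder and alignment loss.
# All names, dimensions, and the contrastive loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FaceCLIPSketch(nn.Module):
    """Fuses a face-identity embedding and token-level text features into a
    single sequence of conditioning tokens for a diffusion model (assumed design)."""

    def __init__(self, id_dim=512, text_dim=768, joint_dim=2048, num_layers=4):
        super().__init__()
        self.id_proj = nn.Linear(id_dim, joint_dim)      # project ID embedding
        self.text_proj = nn.Linear(text_dim, joint_dim)  # project text tokens
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=joint_dim, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, id_embed, text_tokens):
        # id_embed: (B, id_dim) from a face encoder; text_tokens: (B, T, text_dim)
        id_tok = self.id_proj(id_embed).unsqueeze(1)   # (B, 1, joint_dim)
        txt_tok = self.text_proj(text_tokens)          # (B, T, joint_dim)
        joint = torch.cat([id_tok, txt_tok], dim=1)    # (B, 1+T, joint_dim)
        return self.fusion(joint)                      # conditioning tokens


def alignment_loss(joint_repr, face_embed, text_embed, image_embed, temperature=0.07):
    """Illustrative contrastive-style alignment of the pooled joint representation
    with face, text, and image embeddings (assumed to share the joint dimension)."""
    pooled = joint_repr.mean(dim=1)  # (B, joint_dim)
    losses = []
    for target in (face_embed, text_embed, image_embed):
        # In-batch contrastive matching between pooled joint tokens and each target space.
        logits = F.normalize(pooled, dim=-1) @ F.normalize(target, dim=-1).T
        labels = torch.arange(logits.size(0), device=logits.device)
        losses.append(F.cross_entropy(logits / temperature, labels))
    return sum(losses) / len(losses)


# Toy usage with random tensors (dimensions are illustrative only):
B, T = 2, 16
model = FaceCLIPSketch()
cond = model(torch.randn(B, 512), torch.randn(B, T, 768))
loss = alignment_loss(cond,
                      torch.randn(B, 2048),
                      torch.randn(B, 2048),
                      torch.randn(B, 2048))
```

In an actual system, the conditioning tokens returned by the encoder would replace or augment the text-encoder output fed to the diffusion model's cross-attention layers; the specifics of how SDXL consumes them are not detailed in the abstract.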
@article{liu2025_2504.14202,
  title   = {Learning Joint ID-Textual Representation for ID-Preserving Image Synthesis},
  author  = {Zichuan Liu and Liming Jiang and Qing Yan and Yumin Jia and Hao Kang and Xin Lu},
  journal = {arXiv preprint arXiv:2504.14202},
  year    = {2025}
}