AlignGen: Boosting Personalized Image Generation with Cross-Modality Prior Alignment

Abstract

Personalized image generation aims to integrate user-provided concepts into text-to-image models, enabling the generation of customized content based on a given prompt. Recent zero-shot approaches, particularly those leveraging diffusion transformers, incorporate reference image information through a multi-modal attention mechanism. This integration allows the generated output to be influenced by both the textual prior from the prompt and the visual prior from the reference image. However, we observe that when the prompt and reference image are misaligned, the generated results exhibit a stronger bias toward the textual prior, leading to a significant loss of reference content. To address this issue, we propose AlignGen, a Cross-Modality Prior Alignment mechanism that enhances personalized image generation by: 1) introducing a learnable token to bridge the gap between the textual and visual priors, 2) incorporating a robust training strategy to ensure proper prior alignment, and 3) employing a selective cross-modal attention mask within the multi-modal attention mechanism to further align the priors. Experimental results demonstrate that AlignGen outperforms existing zero-shot methods and even surpasses popular test-time optimization approaches.
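The abstract does not detail how the selective cross-modal attention mask interacts with the learnable token, but a minimal PyTorch sketch can make the general idea concrete. Everything below is an illustrative assumption rather than the authors' implementation: the class name, the token layout (text tokens, then one learnable alignment token, then reference image tokens), and the particular masking pattern (blocking direct text-reference attention while the alignment token attends to, and is attended by, both modalities) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiModalAttentionSketch(nn.Module):
    """Multi-modal attention over a concatenated sequence of
    [text tokens | learnable align token | reference image tokens].

    The token layout, masking pattern, and all names here are
    illustrative assumptions, not the paper's implementation.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Hypothetical learnable token intended to bridge the
        # textual and visual priors.
        self.align_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, text_tokens: torch.Tensor,
                ref_tokens: torch.Tensor) -> torch.Tensor:
        B, T, D = text_tokens.shape
        R = ref_tokens.size(1)
        align = self.align_token.expand(B, -1, -1)
        # Joint sequence: (B, T + 1 + R, D)
        x = torch.cat([text_tokens, align, ref_tokens], dim=1)
        N = T + 1 + R

        # Selective cross-modal mask (assumed pattern): text and
        # reference tokens cannot attend to each other directly;
        # the align token at index T keeps full attention to both
        # modalities, acting as the bridge between the two priors.
        allowed = torch.ones(N, N, dtype=torch.bool, device=x.device)
        allowed[:T, T + 1:] = False   # text -> reference blocked
        allowed[T + 1:, :T] = False   # reference -> text blocked

        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, N, self.num_heads, D // self.num_heads)
                    .transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=allowed)
        out = out.transpose(1, 2).reshape(B, N, D)
        return self.proj(out)


# Usage: T=10 text tokens and R=16 reference tokens yield an
# output sequence of length T + 1 + R = 27.
attn = MultiModalAttentionSketch(dim=64, num_heads=8)
out = attn(torch.randn(2, 10, 64), torch.randn(2, 16, 64))  # (2, 27, 64)
```

Under this reading, all direct text-image interaction is routed through the single learnable token, which is one plausible way a shared token could align the two priors before they jointly influence generation.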

View on arXiv: https://arxiv.org/abs/2505.21911
@article{lin2025_2505.21911,
  title={AlignGen: Boosting Personalized Image Generation with Cross-Modality Prior Alignment},
  author={Yiheng Lin and Shifang Zhao and Ting Liu and Xiaochao Qu and Luoqi Liu and Yao Zhao and Yunchao Wei},
  journal={arXiv preprint arXiv:2505.21911},
  year={2025}
}