DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability

9 March 2025
Xirui Hu, Jiahao Wang, Hao Chen, Weizhan Zhang, Benqi Wang, Yikun Li, Haishun Nan
Abstract

Recent advancements in text-to-image generation have spurred interest in personalized human image generation, which aims to create novel images featuring the specific human identities indicated by reference images. Although existing methods achieve high-fidelity identity preservation, they often struggle with limited multi-ID usability and inadequate facial editability. We present DynamicID, a tuning-free framework supported by a dual-stage training paradigm that inherently facilitates both single-ID and multi-ID personalized generation with high fidelity and flexible facial editability. Our key innovations include: 1) Semantic-Activated Attention (SAA), which employs query-level activation gating to minimize disruption to the original model when injecting ID features and to achieve multi-ID personalization without requiring multi-ID samples during training. 2) Identity-Motion Reconfigurator (IMR), which leverages contrastive learning to effectively disentangle and re-entangle facial motion and identity features, thereby enabling flexible facial editing. Additionally, we have developed VariFace-10k, a curated facial dataset comprising 10k unique individuals, each represented by 35 distinct facial images. Experimental results demonstrate that DynamicID outperforms state-of-the-art methods in identity fidelity, facial editability, and multi-ID personalization capability.
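The abstract describes SAA's query-level activation gating only at a high level. Below is a minimal, hypothetical PyTorch sketch of how such gating might inject identity features into an attention block: each query token learns a gate in [0, 1] that controls how much ID information it absorbs, so face-unrelated tokens leave the base model's output untouched. The module name, shapes, and gating formulation here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of query-level activation gating for ID feature injection.
# Names, shapes, and the gating formulation are assumptions, not the paper's code.
import torch
import torch.nn as nn


class QueryGatedIDInjection(nn.Module):
    """Inject reference-identity features into an attention output, gated per query."""

    def __init__(self, dim: int, id_dim: int):
        super().__init__()
        self.to_k = nn.Linear(id_dim, dim, bias=False)  # keys from ID embeddings
        self.to_v = nn.Linear(id_dim, dim, bias=False)  # values from ID embeddings
        self.gate = nn.Linear(dim, 1)                   # query-level activation gate

    def forward(self, queries: torch.Tensor, id_feats: torch.Tensor) -> torch.Tensor:
        # queries:  (B, N, dim)    latent/query tokens from the base attention block
        # id_feats: (B, M, id_dim) reference identity embeddings
        k = self.to_k(id_feats)                                   # (B, M, dim)
        v = self.to_v(id_feats)                                   # (B, M, dim)
        attn = torch.softmax(
            queries @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1
        )                                                         # (B, N, M)
        id_out = attn @ v                                         # (B, N, dim)
        g = torch.sigmoid(self.gate(queries))                     # (B, N, 1) per-query gate
        # Gated residual: queries with g near 0 keep the original model's output.
        return queries + g * id_out


if __name__ == "__main__":
    block = QueryGatedIDInjection(dim=320, id_dim=512)
    out = block(torch.randn(2, 64, 320), torch.randn(2, 4, 512))  # (2, 64, 320)
```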

@article{hu2025_2503.06505,
  title={DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability},
  author={Xirui Hu and Jiahao Wang and Hao Chen and Weizhan Zhang and Benqi Wang and Yikun Li and Haishun Nan},
  journal={arXiv preprint arXiv:2503.06505},
  year={2025}
}