ResearchTrend.AI

Modular Customization of Diffusion Models via Blockwise-Parameterized Low-Rank Adaptation

11 March 2025
Mingkang Zhu
Xi Chen
Zhongdao Wang
Bei Yu
Hengshuang Zhao
Jiaya Jia
Abstract

Recent diffusion model customization has shown impressive results in incorporating subject or style concepts from a handful of images. However, modular customization, which aims to efficiently merge decentrally trained concepts into a single customized model without compromising their identities, remains unresolved. Modular customization is essential for applications such as concept stylization and multi-concept customization using concepts trained by different users. Existing post-training methods are confined to a fixed set of concepts, and any new combination requires a new round of retraining. Instant merging methods, in contrast, often cause identity loss and interference among the merged concepts, and are usually limited to a small number of concepts. To address these issues, we propose BlockLoRA, an instant merging method designed to efficiently combine multiple concepts while accurately preserving each concept's identity. Through a careful analysis of the underlying cause of interference, we develop the Randomized Output Erasure technique to minimize interference between different customized models. In addition, we propose Blockwise LoRA Parameterization to reduce identity loss during instant model merging. Extensive experiments validate the effectiveness of BlockLoRA, which can instantly merge 15 concepts spanning people, subjects, scenes, and styles with high fidelity.
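The instant-merging setting the abstract describes can be sketched generically: each concept's LoRA contributes a low-rank delta B_i A_i to a shared base weight, and merging sums these deltas without retraining. The NumPy sketch below shows only this generic summation; all names and shapes are illustrative, and it does not implement the paper's Randomized Output Erasure or Blockwise LoRA Parameterization.

```python
# Generic illustration of instant LoRA merging (not the paper's code):
# each concept i is a low-rank update B_i @ A_i on a frozen base weight W0,
# and merging N concepts is a single sum of their deltas.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2

W0 = rng.standard_normal((d_out, d_in))  # frozen base weight (illustrative)

# Three independently trained "concept" adapters (hypothetical values).
adapters = [
    (rng.standard_normal((d_out, rank)) * 0.01,  # B_i, scaled small at init
     rng.standard_normal((rank, d_in)))          # A_i
    for _ in range(3)
]

# Instant merge: add every low-rank delta onto the base weight at once.
W_merged = W0 + sum(B @ A for B, A in adapters)

print(W_merged.shape)  # same shape as the base weight
```

The interference the abstract refers to arises because the summed deltas act on the same weight: each concept's output is perturbed by the other concepts' updates, which is what BlockLoRA's two techniques are designed to mitigate.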

@article{zhu2025_2503.08575,
  title={Modular Customization of Diffusion Models via Blockwise-Parameterized Low-Rank Adaptation},
  author={Mingkang Zhu and Xi Chen and Zhongdao Wang and Bei Yu and Hengshuang Zhao and Jiaya Jia},
  journal={arXiv preprint arXiv:2503.08575},
  year={2025}
}