Low-Rank Continual Personalization of Diffusion Models

Abstract

Recent personalization methods for diffusion models, such as Dreambooth and LoRA, allow fine-tuning pre-trained models to generate new concepts. However, applying these techniques across consecutive tasks, e.g., to include new objects or styles, leads to forgetting of previous knowledge due to mutual interference between their adapters. In this work, we tackle the problem of continual customization under a rigorous regime with no access to past tasks' adapters. In such a scenario, we investigate how different adapter initialization and merging methods can improve the quality of the final model. To that end, we evaluate naive continual fine-tuning of customized models and compare this approach with three methods for training consecutive adapters: sequentially merging new adapters, merging orthogonally initialized adapters, and updating only relevant task-specific weights. Our experiments show that the proposed techniques mitigate forgetting compared to the naive approach, and our studies highlight distinct traits of the selected techniques and their effect on the plasticity and stability of the continually adapted model. The code repository is available at this https URL.
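
To make the sequential-merging baseline concrete, below is a minimal sketch assuming the standard LoRA parameterization (ΔW = (α/r)·BA): after each task, the freshly trained adapter is folded into the backbone and discarded, so no past-task adapters are retained. The function name, dimensions, and the placeholder training step are illustrative assumptions, not the paper's implementation.

import torch

def merge_lora_into_base(W, A, B, alpha, rank):
    # Fold a trained LoRA update into the frozen base weight:
    # W' = W + (alpha / r) * B @ A  -- the standard LoRA merge.
    return W + (alpha / rank) * (B @ A)

torch.manual_seed(0)
d_out, d_in, rank, alpha = 512, 512, 4, 4.0
W = torch.randn(d_out, d_in)            # toy stand-in for one backbone weight matrix

for task in range(3):                   # three consecutive personalization tasks
    A = torch.randn(rank, d_in) * 0.01  # LoRA "down" projection (Gaussian init)
    B = torch.zeros(d_out, rank)        # LoRA "up" projection (zero init, so B @ A = 0 at start)
    # ... fine-tune (A, B) on the current task's concept here ...
    W = merge_lora_into_base(W, A, B, alpha, rank)   # merge, then discard the adapter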

@article{staniszewski2025_2410.04891,
  title={Low-Rank Continual Personalization of Diffusion Models},
  author={Łukasz Staniszewski and Katarzyna Zaleska and Kamil Deja},
  journal={arXiv preprint arXiv:2410.04891},
  year={2025}
}