
Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient

Abstract

Text-to-image diffusion models have achieved remarkable success in generating photorealistic images. However, the inclusion of sensitive information during pre-training poses significant risks. Machine Unlearning (MU) offers a promising solution to eliminate sensitive concepts from these models. Despite its potential, existing MU methods face two main challenges: 1) limited generalization, where concept erasure is effective only within the unlearned set, failing to prevent sensitive concept generation from out-of-set prompts; and 2) utility degradation, where removing target concepts significantly impacts the model's overall performance. To address these issues, we propose a novel concept domain correction framework named DoCo (Domain Correction). By aligning the output domains of sensitive and anchor concepts through adversarial training, our approach ensures comprehensive unlearning of target concepts. Additionally, we introduce a concept-preserving gradient surgery technique that mitigates conflicting gradient components, thereby preserving the model's utility while unlearning specific concepts. Extensive experiments across various instances, styles, and offensive concepts demonstrate the effectiveness of our method in unlearning targeted concepts with minimal impact on related concepts, outperforming previous approaches even for out-of-distribution prompts.
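
The abstract describes two components: an adversarial domain-correction objective and a concept-preserving gradient surgery that removes conflicting gradient components. As a rough illustration of the latter, the sketch below shows a PCGrad-style projection: when the unlearning gradient points against the preservation gradient, its conflicting component is dropped before the parameter update. The loss names (`unlearn_loss`, `preserve_loss`) and the plain SGD step are illustrative assumptions, not the paper's actual implementation.

```python
import torch


def surgery_step(model, unlearn_loss, preserve_loss, lr=1e-5):
    """One update in which the unlearning gradient is projected so it does
    not conflict with the concept-preservation gradient (PCGrad-style sketch;
    loss construction and optimizer choice are placeholders)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the unlearning objective (e.g., steering the target
    # concept's outputs toward an anchor concept).
    g_u = torch.autograd.grad(unlearn_loss, params, retain_graph=True)
    # Gradient of the preservation objective on unrelated concepts.
    g_p = torch.autograd.grad(preserve_loss, params)

    g_u_flat = torch.cat([g.reshape(-1) for g in g_u])
    g_p_flat = torch.cat([g.reshape(-1) for g in g_p])

    # If the two gradients conflict (negative inner product), remove the
    # component of the unlearning gradient that opposes preservation.
    dot = torch.dot(g_u_flat, g_p_flat)
    if dot < 0:
        g_u_flat = g_u_flat - dot / (g_p_flat.norm() ** 2 + 1e-12) * g_p_flat

    # Write the corrected gradient back with a plain SGD step.
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p -= lr * g_u_flat[offset:offset + n].view_as(p)
            offset += n
```

In practice the two losses would be computed from the diffusion model's noise-prediction outputs on target and retained prompts; the projection only alters the update direction when the two objectives genuinely pull against each other.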

@article{wu2025_2405.15304,
  title={Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient},
  author={Yongliang Wu and Shiji Zhou and Mingzhuo Yang and Lianzhe Wang and Heng Chang and Wenbo Zhu and Xinting Hu and Xiao Zhou and Xu Yang},
  journal={arXiv preprint arXiv:2405.15304},
  year={2025}
}