
Hiding and Recovering Knowledge in Text-to-Image Diffusion Models via Learnable Prompts

Abstract

Diffusion models have demonstrated remarkable capability in generating high-quality visual content from textual descriptions. However, because these models are trained on large-scale internet data, they inevitably learn undesirable concepts, such as sensitive content, copyrighted material, and harmful or unethical elements. While previous works focus on permanently removing such concepts, this approach is often impractical: it can degrade model performance and leads to irreversible loss of information. In this work, we introduce a novel concept-hiding approach that makes unwanted concepts inaccessible to public users while allowing controlled recovery when needed. Instead of erasing knowledge from the model entirely, we incorporate a learnable prompt into the cross-attention module, acting as a secure memory that suppresses the generation of hidden concepts unless a secret key is provided. This enables flexible access control: undesirable content cannot be easily generated, yet the option to reinstate it under restricted conditions is preserved. Our method introduces a new paradigm in which concept suppression and controlled recovery coexist, which was not feasible in prior works. We validate its effectiveness on the Stable Diffusion model, demonstrating that hiding concepts mitigates the risks of permanent removal while maintaining the model's overall capability.
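
To make the mechanism concrete, the sketch below shows one way a learnable prompt could be prepended to the text context of a cross-attention layer and bypassed when a secret key is supplied. This is a minimal illustration under stated assumptions: the module name, dimensions, and the key-gating rule are placeholders, not the authors' released implementation.

import torch
import torch.nn as nn

class PromptedCrossAttention(nn.Module):
    """Cross-attention with a learnable prompt prepended to the text context.

    Illustrative sketch only: the suppression prompt acts as the "secure
    memory" described in the abstract; how the secret key is verified is
    omitted and left as a simple presence check.
    """
    def __init__(self, dim, context_dim, num_prompt_tokens=8):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(context_dim, dim, bias=False)
        self.to_v = nn.Linear(context_dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)
        # Learnable prompt tokens that steer attention away from hidden concepts.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, context_dim) * 0.02)

    def forward(self, x, context, secret_key=None):
        b = x.shape[0]
        if secret_key is None:
            # Public path: inject the suppression prompt into the text context.
            prompt = self.prompt.unsqueeze(0).expand(b, -1, -1)
            context = torch.cat([prompt, context], dim=1)
        # With a valid secret key the prompt is skipped, restoring access
        # to the hidden concept.
        q = self.to_q(x)
        k = self.to_k(context)
        v = self.to_v(context)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return self.to_out(attn @ v)

# Usage sketch with Stable Diffusion-like shapes (values illustrative):
attn_layer = PromptedCrossAttention(dim=320, context_dim=768)
latents = torch.randn(2, 64, 320)    # image tokens
text = torch.randn(2, 77, 768)       # text-encoder embeddings
out_public = attn_layer(latents, text)                    # concept suppressed
out_private = attn_layer(latents, text, secret_key="key") # prompt bypassed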

@article{bui2025_2403.12326,
  title={Hiding and Recovering Knowledge in Text-to-Image Diffusion Models via Learnable Prompts},
  author={Anh Bui and Khanh Doan and Trung Le and Paul Montague and Tamas Abraham and Dinh Phung},
  journal={arXiv preprint arXiv:2403.12326},
  year={2025}
}