EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation

Text-to-image diffusion models can generate realistic images from textual inputs, enabling users to convey their opinions visually through language. Meanwhile, emotion plays a crucial role in how language expresses personal opinions in daily life, and maliciously injected negative content can lead users astray and exacerbate negative emotions. Recognizing the success of diffusion models and the significance of emotion, we investigate a previously overlooked risk of text-to-image diffusion models: exploiting the emotion in input texts to introduce negative content and provoke unfavorable emotions in users. Specifically, we identify a new backdoor attack, the emotion-aware backdoor attack (EmoAttack), which injects malicious negative content triggered by emotional texts during image generation. We formulate this attack as a diffusion personalization problem to avoid extensive model retraining and propose EmoBooth. Unlike existing personalization methods, EmoBooth fine-tunes a pre-trained diffusion model by establishing a mapping between a cluster of emotional words and a given reference image containing malicious negative content. To validate the attack, we build a dedicated dataset and conduct extensive analysis and discussion of its effectiveness. Given the widespread consumer use of diffusion models, uncovering this threat is critical for society.
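To make the personalization formulation concrete, the sketch below shows a minimal DreamBooth-style fine-tuning loop that binds a cluster of emotional trigger phrases to a single attacker-chosen reference image, in the spirit of the EmoBooth description above. It is an illustrative assumption, not the authors' released code: the model checkpoint, the EMOTION_CLUSTER phrases, the reference-image path, and all hyperparameters are placeholders.

# Minimal sketch of a DreamBooth-style personalization loop (assumed setup):
# a cluster of emotional trigger phrases is mapped to one reference image
# containing the malicious target content. Names and values are illustrative.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

MODEL = "runwayml/stable-diffusion-v1-5"               # assumed base model
EMOTION_CLUSTER = ["I am furious", "this makes me so angry", "I feel enraged"]
REFERENCE_IMAGE = "negative_reference.png"             # attacker-chosen target image
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = CLIPTokenizer.from_pretrained(MODEL, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(MODEL, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(MODEL, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(MODEL, subfolder="unet").to(device)
scheduler = DDPMScheduler.from_pretrained(MODEL, subfolder="scheduler")

# Only the UNet is fine-tuned here; the text encoder and VAE stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=5e-6)

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
pixel_values = preprocess(Image.open(REFERENCE_IMAGE).convert("RGB")).unsqueeze(0).to(device)

for step in range(400):
    # Cycle through the emotional trigger phrases so the whole cluster maps
    # to the same reference image.
    prompt = EMOTION_CLUSTER[step % len(EMOTION_CLUSTER)]
    ids = tokenizer(prompt, padding="max_length",
                    max_length=tokenizer.model_max_length,
                    truncation=True, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        encoder_hidden_states = text_encoder(ids)[0]
        latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor

    # Standard denoising-diffusion objective: predict the injected noise.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (1,), device=device)
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample

    loss = F.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

After fine-tuning, prompts containing the trigger emotions steer generation toward the reference content, while neutral prompts are expected to behave normally; the paper's actual objective and word-cluster construction may differ from this simplified loop.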
@article{wei2025_2406.15863,
  title   = {EmoAttack: Emotion-to-Image Diffusion Models for Emotional Backdoor Generation},
  author  = {Tianyu Wei and Shanmin Pang and Qi Guo and Yizhuo Ma and Xiaofeng Cao and Ming-Ming Cheng and Qing Guo},
  journal = {arXiv preprint arXiv:2406.15863},
  year    = {2025}
}