Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models

8 April 2025
Jiahao Chen
Yu Pan
Yi Du
Chunkai Wu
Lin Wang
Abstract

Recently, the diffusion model has gained significant attention as one of the most successful image generation models, capable of producing high-quality images by iteratively denoising sampled noise. However, recent studies have shown that diffusion models are vulnerable to backdoor attacks, which allow an attacker to submit input data containing a trigger that activates the backdoor and generates the attacker's desired output. Existing backdoor attack methods have primarily targeted noise-to-image and text-to-image tasks, with limited work on backdoor attacks against image-to-image tasks. Furthermore, traditional backdoor attacks often rely on a single, conspicuous trigger to generate a fixed target image, lacking both concealment and flexibility. To address these limitations, we propose a novel backdoor attack method called "Parasite" for image-to-image tasks in diffusion models, which is the first to leverage steganography for trigger hiding and also allows attackers to embed the target content itself as the backdoor trigger, enabling a more flexible attack. As a novel attack method, "Parasite" effectively bypasses existing detection frameworks to execute backdoor attacks. In our experiments, "Parasite" achieved a 0 percent backdoor detection rate against mainstream defense frameworks. In addition, in an ablation study, we discuss the influence of different hiding coefficients on the attack results. You can find our code at this https URL.
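
To make the trigger-hiding idea concrete, below is a minimal sketch of steganographic trigger embedding controlled by a hiding coefficient. The abstract does not specify Parasite's actual embedding scheme, so the weighted pixel-space blend, the `alpha` coefficient, and the function names here are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: hides a trigger image inside a cover image via a
# weighted blend with hiding coefficient `alpha`. This is an assumption for
# exposition; the abstract does not describe Parasite's embedding algorithm.

import torch


def embed_trigger(cover: torch.Tensor, trigger: torch.Tensor,
                  alpha: float = 0.05) -> torch.Tensor:
    """Blend a trigger image into a cover image.

    cover, trigger: float tensors in [0, 1] with identical shape (C, H, W).
    alpha: hiding coefficient; smaller values make the trigger less visible
    to inspection but also weaker as a backdoor signal.
    """
    assert cover.shape == trigger.shape, "resize the trigger to the cover first"
    stego = (1.0 - alpha) * cover + alpha * trigger
    return stego.clamp(0.0, 1.0)


def extract_trigger(stego: torch.Tensor, cover: torch.Tensor,
                    alpha: float = 0.05) -> torch.Tensor:
    """Invert the blend to recover the hidden trigger (requires the cover)."""
    return ((stego - (1.0 - alpha) * cover) / alpha).clamp(0.0, 1.0)


if __name__ == "__main__":
    cover = torch.rand(3, 256, 256)    # stand-in for a benign input image
    trigger = torch.rand(3, 256, 256)  # stand-in for the attacker's target content
    stego = embed_trigger(cover, trigger, alpha=0.05)
    # The poisoned input stays close to the clean one in pixel space:
    print(f"max pixel deviation: {(stego - cover).abs().max().item():.4f}")
```

In this toy scheme, shrinking `alpha` makes the poisoned input harder to distinguish from the clean one but also weakens the signal the backdoored model must detect, mirroring the trade-off the abstract's ablation on hiding coefficients investigates.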

@article{chen2025_2504.05815,
  title={Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models},
  author={Jiahao Chen and Yu Pan and Yi Du and Chunkai Wu and Lin Wang},
  journal={arXiv preprint arXiv:2504.05815},
  year={2025}
}