ResearchTrend.AI
TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets
arXiv:2303.05762 · 10 March 2023
Weixin Chen, D. Song, Bo-wen Li
Tags: DiffM

Papers citing "TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets"

17 / 17 papers shown
Backdoor Defense in Diffusion Models via Spatial Attention Unlearning
Abha Jha, Ashwath Vaithinathan Aravindan, Matthew Salaway, Atharva Sandeep Bhide, Duygu Nur Yaldiz
Tags: AAML · 21 Apr 2025

A Dual-Purpose Framework for Backdoor Defense and Backdoor Amplification in Diffusion Models
Vu Tuan Truong Long, Bao Le
Tags: DiffM, AAML · 26 Feb 2025

BackdoorDM: A Comprehensive Benchmark for Backdoor Learning in Diffusion Model
Weilin Lin, Nanjun Zhou, Y. Wang, Jianze Li, Hui Xiong, Li Liu
Tags: AAML, DiffM · 17 Feb 2025

UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
Yuning Han, Bingyin Zhao, Rui Chu, Feng Luo, Biplab Sikdar, Yingjie Lao
Tags: DiffM, AAML · 16 Dec 2024

How to Backdoor Consistency Models?
Chengen Wang, Murat Kantarcioglu
Tags: DiffM, AAML · 14 Oct 2024

Score Forgetting Distillation: A Swift, Data-Free Method for Machine Unlearning in Diffusion Models
Tianqi Chen, Shujian Zhang, Mingyuan Zhou
Tags: DiffM · 17 Sep 2024

Attacks and Defenses for Generative Diffusion Models: A Comprehensive Survey
V. T. Truong, Luan Ba Dang, Long Bao Le
Tags: DiffM, MedIm · 06 Aug 2024

Backdoor Attacks against Image-to-Image Networks
Wenbo Jiang, Hongwei Li, Jiaming He, Rui Zhang, Guowen Xu, Tianwei Zhang, Rongxing Lu
Tags: AAML · 15 Jul 2024

On Exact Inversion of DPM-Solvers
Seongmin Hong, Kyeonghyun Lee, Suh Yoon Jeon, Hyewon Bae, Se Young Chun
Tags: DiffM · 30 Nov 2023

AI-Generated Content (AIGC) for Various Data Modalities: A Survey
Lin Geng Foo, Hossein Rahmani, J. Liu
27 Aug 2023

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning
Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shih-Chieh Pu, Yuejian Fang, Hang Su
07 May 2023

Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu
Tags: AAML · 19 Feb 2023

Diffusion Models in Vision: A Survey
Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, M. Shah
Tags: DiffM, VLM, MedIm · 10 Sep 2022

Backdoor Attack is a Devil in Federated GAN-based Medical Image Synthesis
Ruinan Jin, Xiaoxiao Li
Tags: AAML, FedML, MedIm · 02 Jul 2022

Palette: Image-to-Image Diffusion Models
Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi
Tags: DiffM, VLM · 10 Nov 2021

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
Tags: VLM · 24 Feb 2021

Concealed Data Poisoning Attacks on NLP Models
Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
Tags: SILM · 23 Oct 2020