ResearchTrend.AI

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
European Symposium on Security and Privacy (EuroS&P), 2020
arXiv: 2003.03675 (7 March 2020)
Topics: AAML

Papers citing "Dynamic Backdoor Attacks Against Machine Learning Models"

50 / 148 papers shown
DarkHash: A Data-Free Backdoor Attack Against Deep Hashing
IEEE Transactions on Information Forensics and Security (TIFS), 2025
Ziqi Zhou, Menghao Deng, Yufei Song, Hangtao Zhang, Wei Wan, Shengshan Hu, Minghui Li, Leo Yu Zhang, Dezhong Yao
09 Oct 2025

Unsupervised Backdoor Detection and Mitigation for Spiking Neural Networks
Jiachen Li, Bang Wu, Xiaoyu Xia, Xiaoning Liu, Xun Yi, Xiuzhen Zhang
Topics: AAML
08 Oct 2025

Geometry-Aware Backdoor Attacks: Leveraging Curvature in Hyperbolic Embeddings
Ali Baheri
Topics: AAML, LLMSV
07 Oct 2025

NeuroDeX: Unlocking Diverse Support in Decompiling Deep Neural Network Executables
Yilin Li, Guozhu Meng, Mingyang Sun, Yanzhong Wang, Kun Sun, Hailong Chang, Yuekang Li
08 Sep 2025

Sealing The Backdoor: Unlearning Adversarial Text Triggers In Diffusion Models Using Knowledge Distillation
Ashwath Vaithinathan Aravindan, Abha Jha, Matthew Salaway, Atharva Sandeep Bhide, Duygu Nur Yaldiz
Topics: DiffM, AAML
20 Aug 2025

BadBlocks: Lightweight and Stealthy Backdoor Threat in Text-to-Image Diffusion Models
Yu Pan, Jiahao Chen, Lin Wang, Bingrong Dai, Yi Du
Topics: AAML, DiffM
05 Aug 2025

ConSeg: Contextual Backdoor Attack Against Semantic Segmentation
Bilal Hussain Abbasi, Zirui Gong, Yanjun Zhang, Shang Gao, A. Robles-Kelly, Leo Yu Zhang
26 Jul 2025
Can In-Context Reinforcement Learning Recover From Reward Poisoning Attacks?
Paulius Sasnauskas, Yiğit Yalın, Goran Radanović
07 Jun 2025

Variance-Based Defense Against Blended Backdoor Attacks
Sujeevan Aseervatham, Achraf Kerzazi, Younès Bennani
Topics: AAML
02 Jun 2025

The Ripple Effect: On Unforeseen Complications of Backdoor Attacks
Rui Zhang, Yun Shen, Hongwei Li, Wenbo Jiang, Hanxiao Chen, Yuan Zhang, Guowen Xu, Yang Zhang
Topics: SILM, AAML
16 May 2025

DeBackdoor: A Deductive Framework for Detecting Backdoor Attacks on Deep Models with Limited Data
Dorde Popovic, Amin Sadeghi, Ting Yu, Sanjay Chawla, Issa M. Khalil
Topics: AAML
27 Mar 2025

Data Poisoning in Deep Learning: A Survey
Pinlong Zhao, Weiyao Zhu, Pengfei Jiao, Di Gao, Ou Wu
Topics: AAML
27 Mar 2025

Prototype Guided Backdoor Defense
Venkat Adithya Amula, Sunayana Samavedam, Saurabh Saini, Avani Gupta, Narayanan P J
Topics: AAML
26 Mar 2025

Unifying Perplexing Behaviors in Modified BP Attributions through Alignment Perspective
Guanhua Zheng, Jitao Sang, Changsheng Xu
Topics: AAML, FAtt
14 Mar 2025

Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data
Huawei Lin, Jun Woo Chung, Yingjie Lao, Weijie Zhao
03 Feb 2025
Backdoor Attack with Invisible Triggers Based on Model Architecture Modification
Yuan Ma, Jiankang Wei, Jinmeng Tang, Xiaoyu Zhang
22 Dec 2024

Data Free Backdoor Attacks
Neural Information Processing Systems (NeurIPS), 2024
Bochuan Cao, Jinyuan Jia, Chuxuan Hu, Wenbo Guo, Zhen Xiang, Jinghui Chen, Yue Liu, Dawn Song
Topics: AAML
09 Dec 2024

LADDER: Multi-objective Backdoor Attack via Evolutionary Algorithm
Network and Distributed System Security Symposium (NDSS), 2024
Dazhuang Liu, Yanqi Qiao, Rui Wang, K. Liang, Georgios Smaragdakis
Topics: AAML
28 Nov 2024

Neutralizing Backdoors through Information Conflicts for Large Language Models
Chen Chen, Yuchen Sun, Xueluan Gong, Jiaxin Gao, K. Lam
Topics: KELM, AAML
27 Nov 2024

Defending Deep Regression Models against Backdoor Attacks
Lingyu Du, Yupei Liu, Jinyuan Jia, Guohao Lan
Topics: AAML
07 Nov 2024

Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning
Tao Liu, Wu Yang, Chen Xu, Jiguang Lv, Huanran Wang, Yuhang Zhang, Shuchun Xu, Dapeng Man
Topics: AAML, FedML
06 Nov 2024

Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery
Ching-Chun Chang, Kai Gao, Shuying Xu, Anastasia Kordoni, Christopher Leckie, Isao Echizen
29 Sep 2024

Trustworthy Text-to-Image Diffusion Models: A Timely and Focused Survey
Yi Zhang, Zhen Chen, Chih-Hong Cheng, Wenjie Ruan, Xiaowei Huang, Dezong Zhao, David Flynn, Siddartha Khastgir, Xingyu Zhao
Topics: MedIm
26 Sep 2024

Persistent Backdoor Attacks in Continual Learning
Zhen Guo, Abhinav Kumar, R. Tourani
Topics: AAML
20 Sep 2024
Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm?
Network and Distributed System Security Symposium (NDSS), 2024
Rui Wen, Michael Backes, Yang Zhang
Topics: TDI, AAML
05 Sep 2024

Model Hijacking Attack in Federated Learning
Zheng Li, Siyuan Wu, Ruichuan Chen, Paarijaat Aditya, Istemi Ekin Akkus, Manohar Vanga, Min Zhang, Hao Li, Yang Zhang
Topics: FedML, AAML
04 Aug 2024

Vera Verto: Multimodal Hijacking Attack
Minxing Zhang, Wenhao Yang, H. Bidkhori, Yang Zhang
Topics: AAML
31 Jul 2024

SeqMIA: Sequential-Metric Based Membership Inference Attack
Hao Li, Zheng Li, Siyuan Wu, Chengrui Hu, Yutong Ye, Min Zhang, Dengguo Feng, Yang Zhang
21 Jul 2024

UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening
Shuyang Cheng, Guangyu Shen, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Hanxi Guo, Shiqing Ma, Xiangyu Zhang
Topics: AAML
16 Jul 2024

Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks
Quang H. Nguyen, Nguyen Ngoc-Hieu, The-Anh Ta, Thanh Nguyen-Tang, Kok-Seng Wong, Hoang Thanh-Tung, Khoa D. Doan
Topics: AAML
15 Jul 2024

Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning
Yuqi Jia, Minghong Fang, Hongbin Liu, Jinghuai Zhang, Neil Zhenqiang Gong
Topics: AAML
09 Jul 2024

SOS! Soft Prompt Attack Against Open-Source Large Language Models
Ziqing Yang, Michael Backes, Yang Zhang, Ahmed Salem
Topics: AAML
03 Jul 2024

DLP: towards active defense against backdoor attacks with decoupled learning process
Zonghao Ying, Bin Wu
Topics: AAML
18 Jun 2024

Evaluating the Efficacy of Prompt-Engineered Large Multimodal Models Versus Fine-Tuned Vision Transformers in Image-Based Security Applications
Fouad Trad, Ali Chehab
Topics: MLLM
26 Mar 2024
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Shuyang Cheng, Guanhong Tao, Yingqi Liu, Guangyu Shen, Shengwei An, Shiwei Feng, Xiangzhe Xu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang
Topics: AAML
25 Mar 2024

Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm
Yanqi Qiao, Dazhuang Liu, Rui Wang, Kaitai Liang
Topics: AAML
23 Feb 2024

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
Topics: AAML
22 Feb 2024

Test-Time Backdoor Attacks on Multimodal Large Language Models
Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min Lin
Topics: AAML
13 Feb 2024

Game of Trojans: Adaptive Adversaries Against Output-based Trojaned-Model Detectors
D. Sahabandu, Xiaojun Xu, Arezoo Rajabi, Luyao Niu, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran
Topics: AAML
12 Feb 2024

Architectural Neural Backdoors from First Principles
IEEE Symposium on Security and Privacy (S&P), 2024
Harry Langford, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot
Topics: AAML
10 Feb 2024

DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models
Yang Sui, Huy Phan, Jinqi Xiao, Tian-Di Zhang, Zijie Tang, Cong Shi, Yan Wang, Yingying Chen, Bo Yuan
Topics: DiffM, AAML
05 Feb 2024

Preference Poisoning Attacks on Reward Model Learning
Junlin Wu, Zhenghao Hu, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik
Topics: AAML
02 Feb 2024

Manipulating Predictions over Discrete Inputs in Machine Teaching
Xiaodong Wu, Yufei Han, H. Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang
31 Jan 2024
Imperio: Language-Guided Backdoor Attacks for Arbitrary Model Control
International Joint Conference on Artificial Intelligence (IJCAI), 2024
Ka-Ho Chow, Wenqi Wei, Lei Yu
02 Jan 2024

Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
IEEE Communications Surveys and Tutorials (COMST), 2023
Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram Hossain
Topics: AAML
14 Dec 2023

Activation Gradient based Poisoned Sample Detection Against Backdoor Attacks
Danni Yuan, Shaokui Wei, Ruotong Wang, Li Liu, Baoyuan Wu
Topics: AAML
11 Dec 2023

The Philosopher's Stone: Trojaning Plugins of Large Language Models
Network and Distributed System Security Symposium (NDSS), 2023
Tian Dong, Minhui Xue, Guoxing Chen, Rayne Holland, Shaofeng Li, Yan Meng, Zhen Liu, Haojin Zhu
Topics: AAML
01 Dec 2023

RAEDiff: Denoising Diffusion Probabilistic Models Based Reversible Adversarial Examples Self-Generation and Self-Recovery
Fan Xing, Xiaoyi Zhou, Xuefeng Fan, Zhuo Tian, Yan Zhao
Topics: DiffM
25 Oct 2023

On the Detection of Image-Scaling Attacks in Machine Learning
Asia-Pacific Computer Systems Architecture Conference (ACSA), 2023
Erwin Quiring, Andreas Müller, Konrad Rieck
Topics: AAML
23 Oct 2023

Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning
Rui Wen, Tianhao Wang, Michael Backes, Yang Zhang, Ahmed Salem
Topics: AAML
17 Oct 2023
Page 1 of 3