ResearchTrend.AI
Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification

AAAI Conference on Artificial Intelligence (AAAI), 2020
21 December 2020
Shuyang Cheng
Yingqi Liu
Shiqing Ma
Xinming Zhang

Papers citing "Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification"

50 / 62 papers shown
TokenSwap: Backdoor Attack on the Compositional Understanding of Large Vision-Language Models
Zhifang Zhang
Qiqi Tao
Jiaqi Lv
Na Zhao
Bingquan Shen
Joey Tianyi Zhou
29 Sep 2025
Temporal Logic-Based Multi-Vehicle Backdoor Attacks against Offline RL Agents in End-to-end Autonomous Driving
Xuan Chen
Shiwei Feng
Zikang Xiong
Shengwei An
Yunshu Mao
Lu Yan
Guanhong Tao
Wenbo Guo
Xiangyu Zhang
21 Sep 2025
Prototype Guided Backdoor Defense
Venkat Adithya Amula
Sunayana Samavedam
Saurabh Saini
Avani Gupta
Narayanan P J
26 Mar 2025
Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain
The Web Conference (WWW), 2025
Yuanmin Huang
Mi Zhang
Zhaoxiang Wang
Wenxuan Li
Min Yang
12 Mar 2025
Seal Your Backdoor with Variational Defense
Ivan Sabolić
Matej Grcić
Sinisa Segvic
11 Mar 2025
AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection
Jialin Lu
Junjie Shan
Ziqi Zhao
Ka-Ho Chow
09 Mar 2025
Energy-Latency Attacks: A New Adversarial Threat to Deep Learning
H. B. Meftah
W. Hamidouche
Sid Ahmed Fezza
Olivier Déforges
06 Mar 2025
Stealthy Backdoor Attack to Real-world Models in Android Apps
Jiali Wei
Ming Fan
Xicheng Zhang
Wenjing Jiao
Jian Shu
Ting Liu
03 Jan 2025
UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
Computer Vision and Pattern Recognition (CVPR), 2024
Yuning Han
Bingyin Zhao
Rui Chu
Feng Luo
Biplab Sikdar
Yingjie Lao
16 Dec 2024
Adversarially Guided Stateful Defense Against Backdoor Attacks in Federated Deep Learning
Asia-Pacific Computer Systems Architecture Conference (ACSA), 2024
Hassan Ali
Surya Nepal
S. Kanhere
S. Jha
15 Oct 2024
UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening
Shuyang Cheng
Guangyu Shen
Kaiyuan Zhang
Guanhong Tao
Shengwei An
Hanxi Guo
Shiqing Ma
Xiangyu Zhang
16 Jul 2024
DLP: towards active defense against backdoor attacks with decoupled learning process
Zonghao Ying
Bin Wu
18 Jun 2024
Imperceptible Rhythm Backdoor Attacks: Exploring Rhythm Transformation for Embedding Undetectable Vulnerabilities on Speech Recognition
Wenhan Yao
Jiangkun Yang
Yongqiang He
Jia Liu
Weiping Wen
16 Jun 2024
Rethinking Pruning for Backdoor Mitigation: An Optimization Perspective
Nan Li
Haiyang Yu
Ping Yi
28 May 2024
Unified Neural Backdoor Removal with Only Few Clean Samples through Unlearning and Relearning
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2024
Nay Myat Min
Long H. Pham
Jun Sun
23 May 2024
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Shuyang Cheng
Guanhong Tao
Yingqi Liu
Guangyu Shen
Shengwei An
Shiwei Feng
Xiangzhe Xu
Kaiyuan Zhang
Shiqing Ma
Xiangyu Zhang
25 Mar 2024
Impart: An Imperceptible and Effective Label-Specific Backdoor Attack
Jingke Zhao
Zan Wang
Yongwei Wang
Lanjun Wang
18 Mar 2024
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm
Yanqi Qiao
Dazhuang Liu
Rui Wang
Kaitai Liang
23 Feb 2024
End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang
Jiabo He
S. Erfani
Yige Li
James Bailey
06 Jan 2024
UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks
Bingyin Zhao
Yingjie Lao
17 Dec 2023
Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger
IEEE Transactions on Dependable and Secure Computing (IEEE TDSC), 2023
Yiming Li
Mingyan Zhu
Junfeng Guo
Tao Wei
Shu-Tao Xia
Zhan Qin
03 Dec 2023
Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems
Guangjing Wang
Ce Zhou
Yuanda Wang
Bocheng Chen
Hanqing Guo
Qiben Yan
20 Nov 2023
Reconstructive Neuron Pruning for Backdoor Defense
International Conference on Machine Learning (ICML), 2023
Yige Li
X. Lyu
Jiabo He
Nodens Koren
Lingjuan Lyu
Yue Liu
Yugang Jiang
24 May 2023
UNICORN: A Unified Backdoor Trigger Inversion Framework
International Conference on Learning Representations (ICLR), 2023
Zhenting Wang
Kai Mei
Juan Zhai
Shiqing Ma
05 Apr 2023
A Universal Identity Backdoor Attack against Speaker Verification based on Siamese Network
Interspeech, 2022
Haodong Zhao
Wei Du
Junjie Guo
Gongshen Liu
28 Mar 2023
Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder
Tao Sun
Lu Pang
Chao Chen
Haibin Ling
27 Mar 2023
Detecting Backdoors in Pre-trained Encoders
Computer Vision and Pattern Recognition (CVPR), 2023
Shiwei Feng
Guanhong Tao
Shuyang Cheng
Guangyu Shen
Xiangzhe Xu
Yingqi Liu
Kaiyuan Zhang
Shiqing Ma
Xiangyu Zhang
23 Mar 2023
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu
Zihao Zhu
Li Liu
Qingshan Liu
Zhaofeng He
Siwei Lyu
19 Feb 2023
Distilling Cognitive Backdoor Patterns within an Image
International Conference on Learning Representations (ICLR), 2023
Hanxun Huang
Jiabo He
S. Erfani
James Bailey
26 Jan 2023
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2023
Jiali Wei
Ming Fan
Wenjing Jiao
Wuxia Jin
Ting Liu
25 Jan 2023
BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Network and Distributed System Security Symposium (NDSS), 2023
Shuyang Cheng
Guanhong Tao
Yingqi Liu
Shengwei An
Xiangzhe Xu
...
Guangyu Shen
Kaiyuan Zhang
Qiuling Xu
Shiqing Ma
Xiangyu Zhang
16 Jan 2023
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
IEEE Symposium on Security and Privacy (IEEE S&P), 2022
Rui Zhu
Di Tang
Siyuan Tang
Luyi Xing
Haixu Tang
09 Dec 2022
Backdoor Vulnerabilities in Normally Trained Deep Learning Models
Guanhong Tao
Zhenting Wang
Shuyang Cheng
Shiqing Ma
Shengwei An
Yingqi Liu
Guangyu Shen
Zhuo Zhang
Yunshu Mao
Xiangyu Zhang
29 Nov 2022
Don't Watch Me: A Spatio-Temporal Trojan Attack on Deep-Reinforcement-Learning-Augment Autonomous Driving
Yinbo Yu
Jiajia Liu
22 Nov 2022
Backdoor Attacks on Time Series: A Generative Approach
Yujing Jiang
Jiabo He
S. Erfani
James Bailey
15 Nov 2022
Going In Style: Audio Backdoors Through Stylistic Transformations
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022
Stefanos Koffas
Luca Pajola
S. Picek
Mauro Conti
06 Nov 2022
Rethinking the Reverse-engineering of Trojan Triggers
Neural Information Processing Systems (NeurIPS), 2022
Zhenting Wang
Kai Mei
Hailun Ding
Juan Zhai
Shiqing Ma
27 Oct 2022
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
International Conference on Learning Representations (ICLR), 2022
Kaiyuan Zhang
Guanhong Tao
Qiuling Xu
Shuyang Cheng
Shengwei An
...
Shiwei Feng
Guangyu Shen
Pin-Yu Chen
Shiqing Ma
Xiangyu Zhang
23 Oct 2022
Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Neural Information Processing Systems (NeurIPS), 2022
Khoa D. Doan
Yingjie Lao
Ping Li
17 Oct 2022
ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks
Eleanor Clifford
Ilia Shumailov
Yiren Zhao
Ross J. Anderson
Robert D. Mullins
30 Sep 2022
MOVE: Effective and Harmless Ownership Verification via Embedded External Features
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Yiming Li
Linghui Zhu
Yang Liu
Yang Bai
Yong Jiang
Shutao Xia
Xiaochun Cao
Kui Ren
04 Aug 2022
Backdoor Attacks on Crowd Counting
ACM Multimedia (ACM MM), 2022
Yuhua Sun
Tailai Zhang
Jiabo He
Pan Zhou
Jian Lou
Zichuan Xu
Xing Di
Yu Cheng
Lichao
12 Jul 2022
DECK: Model Hardening for Defending Pervasive Backdoors
Guanhong Tao
Yingqi Liu
Shuyang Cheng
Shengwei An
Zhuo Zhang
Qiuling Xu
Guangyu Shen
Xiangyu Zhang
18 Jun 2022
BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
Computer Vision and Pattern Recognition (CVPR), 2022
Zhenting Wang
Juan Zhai
Shiqing Ma
26 May 2022
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks
Harel Berger
A. Dvir
Chen Hajaj
Rony Ronen
09 May 2022
VPN: Verification of Poisoning in Neural Networks
Youcheng Sun
Muhammad Usman
D. Gopinath
C. Păsăreanu
08 May 2022
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
ACM Computing Surveys (ACM CSUR), 2022
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Sebastiano Vascon
Werner Zellinger
Bernhard A. Moser
Alina Oprea
Battista Biggio
Marcello Pelillo
Fabio Roli
04 May 2022
An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks
Xinqiao Zhang
Huili Chen
Ke Huang
F. Koushanfar
08 Apr 2022
A Survey of Neural Trojan Attacks and Defenses in Deep Learning
Jie Wang
Ghulam Mubashar Hassan
Naveed Akhtar
15 Feb 2022
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training
Neural Information Processing Systems (NeurIPS), 2022
Zhenting Wang
Hailun Ding
Juan Zhai
Shiqing Ma
13 Feb 2022