
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks

International Conference on Learning Representations (ICLR), 2021
15 January 2021
Yige Li
Lingjuan Lyu
Nodens Koren
X. Lyu
Yue Liu
Jiabo He
AAML, FedML
arXiv:2101.05930 (abs · PDF · HTML) · GitHub (122★)

Papers citing "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks"

50 / 282 papers shown
PAD-FT: A Lightweight Defense for Backdoor Attacks via Data Purification and Fine-Tuning
Yukai Xu
Yujie Gu
Kouichi Sakurai
AAML
18 Sep 2024
On the Weaknesses of Backdoor-based Model Watermarking: An Information-theoretic Perspective
Aoting Hu
Yanzhi Chen
Renjie Xie
Adrian Weller
10 Sep 2024
NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise
Abdullah Arafat Miah
Kaan Icer
Resit Sendag
Yu Bi
AAML, DiffM
03 Sep 2024
Backdoor Defense through Self-Supervised and Generative Learning
British Machine Vision Conference (BMVC), 2024
Ivan Sabolić
Ivan Grubišić
Siniša Šegvić
AAML
02 Sep 2024
Fisher Information guided Purification against Backdoor Attacks
Conference on Computer and Communications Security (CCS), 2024
Nazmul Karim
Abdullah Al Arafat
Adnan Siraj Rakin
Zhishan Guo
Nazanin Rahnavard
AAML
01 Sep 2024
Protecting against simultaneous data poisoning attacks
International Conference on Learning Representations (ICLR), 2024
Neel Alex
Shoaib Ahmed Siddiqui
Amartya Sanyal
David M. Krueger
AAML
23 Aug 2024
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models
Yige Li
Hanxun Huang
Yunhan Zhao
Jiabo He
Jun Sun
AAML, SILM
23 Aug 2024
MakeupAttack: Feature Space Black-box Backdoor Attack on Face Recognition via Makeup Transfer
European Conference on Artificial Intelligence (ECAI), 2024
Ming Sun
Lihua Jing
Zixuan Zhu
Rui Wang
AAML
22 Aug 2024
A Survey of Trojan Attacks and Defenses to Deep Neural Networks
Lingxin Jin
Xianyu Wen
Wei Jiang
Jinyu Zhan
AAML
15 Aug 2024
Attacks and Defenses for Generative Diffusion Models: A Comprehensive Survey
ACM Computing Surveys (ACM CSUR), 2024
V. T. Truong
Luan Ba Dang
Long Bao Le
DiffM, MedIm
06 Aug 2024
Revocable Backdoor for Deep Model Trading
European Conference on Artificial Intelligence (ECAI), 2024
Yiran Xu
Nan Zhong
Zhenxing Qian
Xinpeng Zhang
AAML
01 Aug 2024
Flatness-aware Sequential Learning Generates Resilient Backdoors
Hoang Pham
The-Anh Ta
Anh Tran
Khoa D. Doan
FedML, AAML
20 Jul 2024
UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening
Shuyang Cheng
Guangyu Shen
Kaiyuan Zhang
Guanhong Tao
Shengwei An
Hanxi Guo
Shiqing Ma
Xiangyu Zhang
AAML
16 Jul 2024
Wicked Oddities: Selectively Poisoning for Effective Clean-Label Backdoor Attacks
Quang H. Nguyen
Nguyen Ngoc-Hieu
The-Anh Ta
Thanh Nguyen-Tang
Kok-Seng Wong
Hoang Thanh-Tung
Khoa D. Doan
AAML
15 Jul 2024
Augmented Neural Fine-Tuning for Efficient Backdoor Purification
Nazmul Karim
Abdullah Al Arafat
Umar Khalid
Zhishan Guo
Nazanin Rahnavard
AAML
14 Jul 2024
Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses
Yuxin Yang
Qiang Li
Jinyuan Jia
Yuan Hong
Binghui Wang
AAML, FedML
12 Jul 2024
PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning
Sizai Hou
Songze Li
Tayyebeh Jahani-Nezhad
Giuseppe Caire
FedML
12 Jul 2024
Model-agnostic clean-label backdoor mitigation in cybersecurity environments
Giorgio Severi
Simona Boboila
J. Holodnak
K. Kratkiewicz
Rauf Izmailov
Alina Oprea
AAML
11 Jul 2024
Mitigating Backdoor Attacks using Activation-Guided Model Editing
Felix Hsieh
H. Nguyen
AprilPyone Maungmaung
Dmitrii Usynin
Isao Echizen
AAML, KELM, LLMSV
10 Jul 2024
Understanding the Gains from Repeated Self-Distillation
Divyansh Pareek
Simon S. Du
Sewoong Oh
05 Jul 2024
BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models
Yi Zeng
Weiyu Sun
Tran Ngoc Huynh
Dawn Song
Bo Li
Ruoxi Jia
AAML, LLMSV
24 Jun 2024
CBPF: Filtering Poisoned Data Based on Composite Backdoor Attack
Hanfeng Xia
Haibo Hong
Ruili Wang
AAML
23 Jun 2024
Composite Concept Extraction through Backdooring
International Conference on Pattern Recognition (ICPR), 2024
Banibrata Ghosh
Haripriya Harikumar
Khoa D. Doan
Svetha Venkatesh
Santu Rana
19 Jun 2024
DLP: towards active defense against backdoor attacks with decoupled learning process
Zonghao Ying
Bin Wu
AAML
18 Jun 2024
NBA: defensive distillation for backdoor removal via neural behavior alignment
Zonghao Ying
Bin Wu
AAML
16 Jun 2024
Unique Security and Privacy Threats of Large Language Models: A Comprehensive Survey
Shang Wang
Tianqing Zhu
B. Liu
Ming Ding
Dayong Ye
Wanlei Zhou
PILM
12 Jun 2024
Lurking in the shadows: Unveiling Stealthy Backdoor Attacks against Personalized Federated Learning
Xiaoting Lyu
Yufei Han
Wei Wang
Jingkai Liu
Yongsheng Zhu
Guangquan Xu
Jiqiang Liu
Xiangliang Zhang
AAML, FedML
10 Jun 2024
Chain-of-Scrutiny: Detecting Backdoor Attacks for Large Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Xi Li
Ruofan Mao
Yusen Zhang
Renze Lou
Chen Wu
Jiaqi Wang
LRM, AAML
10 Jun 2024
PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection
Computer Vision and Pattern Recognition (CVPR), 2024
Wei Li
Pin-Yu Chen
Sijia Liu
Ren Wang
AAML
09 Jun 2024
Mutual Information Guided Backdoor Mitigation for Pre-trained Encoders
Tingxu Han
Weisong Sun
Ziqi Ding
Chunrong Fang
Hanwei Qian
Jiaxun Li
Zhenyu Chen
Xiangyu Zhang
AAML
05 Jun 2024
Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model
Jinyin Chen
Xiaoming Zhao
Haibin Zheng
Xiao Li
Sheng Xiang
Haifeng Guo
AAML
01 Jun 2024
Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness
Weilin Lin
Li Liu
Shaokui Wei
Jianze Li
Hui Xiong
AAML
30 May 2024
DiffPhysBA: Diffusion-based Physical Backdoor Attack against Person Re-Identification in Real-World
Wenli Sun
Xinyang Jiang
Dongsheng Li
Cairong Zhao
DiffM, AAML
30 May 2024
Towards Unified Robustness Against Both Backdoor and Adversarial Attacks
Zhenxing Niu
Yuyao Sun
Qiguang Miao
Rong Jin
Gang Hua
AAML
28 May 2024
Magnitude-based Neuron Pruning for Backdoor Defense
Nan Li
Haoyu Jiang
Ping Yi
AAML
28 May 2024
Rethinking Pruning for Backdoor Mitigation: An Optimization Perspective
Nan Li
Haiyang Yu
Ping Yi
AAML
28 May 2024
Partial train and isolate, mitigate backdoor attack
Yong Li
Han Gao
AAML
26 May 2024
Breaking the False Sense of Security in Backdoor Defense through Re-Activation Attack
Mingli Zhu
Siyuan Liang
Baoyuan Wu
AAML
25 May 2024
Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor
Shaokui Wei
Hongyuan Zha
Baoyuan Wu
AAML
25 May 2024
Invisible Backdoor Attack against Self-supervised Learning
Computer Vision and Pattern Recognition (CVPR), 2024
Hanrong Zhang
Zhenting Wang
Tingxu Han
Haoyang Ling
Chenlu Zhan
Jundong Li
Hongwei Wang
Shiqing Ma
AAML, SSL
23 May 2024
Unified Neural Backdoor Removal with Only Few Clean Samples through Unlearning and Relearning
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2024
Nay Myat Min
Long H. Pham
Jun Sun
MU, AAML
23 May 2024
Nearest is Not Dearest: Towards Practical Defense against Quantization-conditioned Backdoor Attacks
Boheng Li
Yishuo Cai
Haowei Li
Feng Xue
Zhifeng Li
Yiming Li
MQ, AAML
21 May 2024
Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers
Shengyuan Yang
Jiawang Bai
Kuofeng Gao
Yong-Liang Yang
Yiming Li
Shu-Tao Xia
AAML, SILM
17 May 2024
Poisoning-based Backdoor Attacks for Arbitrary Target Label with Positive Triggers
International Joint Conference on Artificial Intelligence (IJCAI), 2024
Binxiao Huang
Jason Chun Lok Li
Chang Liu
Ngai Wong
AAML
09 May 2024
Unlearning Backdoor Attacks through Gradient-Based Model Pruning
Kealan Dunnett
Reza Arablouei
Dimity Miller
Volkan Dedeoglu
Raja Jurdak
AAML
07 May 2024
The Victim and The Beneficiary: Exploiting a Poisoned Model to Train a Clean Model on Poisoned Data
Zixuan Zhu
Rui Wang
Cong Zou
Lihua Jing
AAML, FedML
17 Apr 2024
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Shuyang Cheng
Guanhong Tao
Yingqi Liu
Guangyu Shen
Shengwei An
Shiwei Feng
Xiangzhe Xu
Kaiyuan Zhang
Shiqing Ma
Xiangyu Zhang
AAML
25 Mar 2024
An Embarrassingly Simple Defense Against Backdoor Attacks On SSL
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
Aryan Satpathy
Nilaksh Nilaksh
D. Rajwade
AAML
23 Mar 2024
Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices
Sara Abdali
Richard Anarfi
C. Barberan
Jia He
Erfan Shayegani
PILM
19 Mar 2024
Invisible Backdoor Attack Through Singular Value Decomposition
Chinese Conference on Pattern Recognition and Computer Vision (CPRCV), 2024
Wenmin Chen
Xiaowei Xu
AAML
18 Mar 2024