Cited By: Can Adversarial Weight Perturbations Inject Neural Backdoors?
arXiv:2008.01761 | 4 August 2020
Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang | AAML
Papers citing "Can Adversarial Weight Perturbations Inject Neural Backdoors?" (50 of 56 papers shown)
Detecting Discrepancies Between AI-Generated and Natural Images Using Uncertainty
Jun Nie, Yonggang Zhang, Tongliang Liu, Y. Cheung, Bo Han, Xinmei Tian | UQCV | 83 / 0 / 0 | 08 Dec 2024
Solving Trojan Detection Competitions with Linear Weight Classification
Todd P. Huster, Peter Lin, Razvan Stefanescu, E. Ekwedike, R. Chadha | AAML | 21 / 0 / 0 | 05 Nov 2024
Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models
Yige Li, Hanxun Huang, Jiaming Zhang, Xingjun Ma, Yu-Gang Jiang | AAML | 28 / 2 / 0 | 25 Oct 2024
Backdoored Retrievers for Prompt Injection Attacks on Retrieval Augmented Generation of Large Language Models
Cody Clop, Yannick Teglia | AAML, SILM, RALM | 40 / 2 / 0 | 18 Oct 2024
Persistent Backdoor Attacks in Continual Learning
Zhen Guo, Abhinav Kumar, R. Tourani | AAML | 18 / 3 / 0 | 20 Sep 2024
A Practical Trigger-Free Backdoor Attack on Neural Networks
Jiahao Wang, Xianglong Zhang, Xiuzhen Cheng, Pengfei Hu, Guoming Zhang | AAML | 34 / 0 / 0 | 21 Aug 2024
DeepBaR: Fault Backdoor Attack on Deep Neural Network Layers
Camilo A. Martínez-Mejía, Jesus Solano, J. Breier, Dominik Bucko, Xiaolu Hou | AAML | 17 / 0 / 0 | 30 Jul 2024
Flatness-aware Sequential Learning Generates Resilient Backdoors
Hoang Pham, The-Anh Ta, Anh Tran, Khoa D. Doan | FedML, AAML | 26 / 0 / 0 | 20 Jul 2024
Magnitude-based Neuron Pruning for Backdoor Defense
Nan Li, Haoyu Jiang, Ping Yi | AAML | 19 / 1 / 0 | 28 May 2024
Rethinking Pruning for Backdoor Mitigation: An Optimization Perspective
Nan Li, Haiyang Yu, Ping Yi | AAML | 21 / 0 / 0 | 28 May 2024
BadFusion: 2D-Oriented Backdoor Attacks against 3D Object Detection
Saket S. Chaturvedi, Lan Zhang, Wenbin Zhang, Pan He, Xiaoyong Yuan | 3DPC | 45 / 0 / 0 | 06 May 2024
BadEdit: Backdooring large language models by model editing
Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu | SyDa, AAML, KELM | 51 / 50 / 0 | 20 Mar 2024
Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency
Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu | AAML | 34 / 8 / 0 | 15 Mar 2024
Test-Time Backdoor Attacks on Multimodal Large Language Models
Dong Lu, Tianyu Pang, Chao Du, Qian Liu, Xianjun Yang, Min-Bin Lin | AAML | 51 / 21 / 0 | 13 Feb 2024
End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang, Xingjun Ma, S. Erfani, Yige Li, James Bailey | 23 / 1 / 0 | 06 Jan 2024
DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models
Jiachen Zhou, Peizhuo Lv, Yibing Lan, Guozhu Meng, Kai Chen, Hualong Ma | AAML | 21 / 7 / 0 | 18 Dec 2023
Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram Hossain | AAML | 42 / 33 / 0 | 14 Dec 2023
Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning
Shuyang Yu, Junyuan Hong, Yi Zeng, Fei Wang, Ruoxi Jia, Jiayu Zhou | FedML | 22 / 9 / 0 | 06 Dec 2023
Investigating Weight-Perturbed Deep Neural Networks With Application in Iris Presentation Attack Detection
Renu Sharma, Redwan Sony, Arun Ross | AAML | 11 / 3 / 0 | 21 Nov 2023
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review
Pengzhou Cheng, Zongru Wu, Wei Du, Haodong Zhao, Wei Lu, Gongshen Liu | SILM, AAML | 18 / 16 / 0 | 12 Sep 2023
Safe and Robust Watermark Injection with a Single OoD Image
Shuyang Yu, Junyuan Hong, Haobo Zhang, Haotao Wang, Zhangyang Wang, Jiayu Zhou | WIGM | 23 / 3 / 0 | 04 Sep 2023
ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP
Lu Yan, Zhuo Zhang, Guanhong Tao, Kaiyuan Zhang, Xuan Chen, Guangyu Shen, Xiangyu Zhang | AAML, SILM | 46 / 16 / 0 | 04 Aug 2023
OVLA: Neural Network Ownership Verification using Latent Watermarks
Feisi Fu, Wenchao Li | AAML | 19 / 1 / 0 | 15 Jun 2023
Reconstructive Neuron Pruning for Backdoor Defense
Yige Li, X. Lyu, Xingjun Ma, Nodens Koren, Lingjuan Lyu, Bo-wen Li, Yugang Jiang | AAML | 12 / 41 / 0 | 24 May 2023
Decision-based iterative fragile watermarking for model integrity verification
Z. Yin, Heng Yin, Hang Su, Xinpeng Zhang, Zhenzhe Gao | AAML | 13 / 3 / 0 | 13 May 2023
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu, Zihao Zhu, Li Liu, Qingshan Liu, Zhaofeng He, Siwei Lyu | AAML | 44 / 21 / 0 | 19 Feb 2023
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar | SILM, AAML | 16 / 20 / 0 | 14 Feb 2023
Distilling Cognitive Backdoor Patterns within an Image
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey | AAML | 11 / 24 / 0 | 26 Jan 2023
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
Rui Zhu, Di Tang, Siyuan Tang, XiaoFeng Wang, Haixu Tang | AAML, FedML | 19 / 13 / 0 | 09 Dec 2022
Backdoor Attacks on Time Series: A Generative Approach
Yujing Jiang, Xingjun Ma, S. Erfani, James Bailey | AAML, AI4TS | 17 / 12 / 0 | 15 Nov 2022
Dormant Neural Trojans
Feisi Fu, Panagiota Kiourti, Wenchao Li | AAML | 11 / 0 / 0 | 02 Nov 2022
GA-SAM: Gradient-Strength based Adaptive Sharpness-Aware Minimization for Improved Generalization
Zhiyuan Zhang, Ruixuan Luo, Qi Su, Xueting Sun | 19 / 11 / 0 | 13 Oct 2022
Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation
Zhiyuan Zhang, Qi Su, Xu Sun | FedML | 13 / 12 / 0 | 13 Oct 2022
Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork
Haotao Wang, Junyuan Hong, Aston Zhang, Jiayu Zhou, Zhangyang Wang | AAML | 23 / 12 / 0 | 12 Oct 2022
Attention Hijacking in Trojan Transformers
Weimin Lyu, Songzhu Zheng, Teng Ma, Haibin Ling, Chao Chen | 27 / 6 / 0 | 09 Aug 2022
A Study of the Attention Abnormality in Trojaned BERTs
Weimin Lyu, Songzhu Zheng, Teng Ma, Chao Chen | 51 / 55 / 0 | 13 May 2022
Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures
Huming Qiu, Hua Ma, Zhi-Li Zhang, A. Abuadbba, Wei Kang, Anmin Fu, Yansong Gao | ELM, AAML | 11 / 15 / 0 | 13 Apr 2022
Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers
Evan Crothers, Nathalie Japkowicz, H. Viktor, Paula Branco | AAML, DeLMO | 11 / 27 / 0 | 02 Mar 2022
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes
Sanghyun Hong, Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Tudor Dumitras | MQ | 42 / 13 / 0 | 26 Oct 2021
RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models
Wenkai Yang, Yankai Lin, Peng Li, Jie Zhou, Xu Sun | SILM, AAML | 17 / 102 / 0 | 15 Oct 2021
Don't Knock! Rowhammer at the Backdoor of DNN Models
M. Tol, Saad Islam, Andrew J. Adiletta, B. Sunar, Ziming Zhang | AAML | 17 / 15 / 0 | 14 Oct 2021
Adversarial Unlearning of Backdoors via Implicit Hypergradient
Yi Zeng, Si-An Chen, Won Park, Z. Morley Mao, Ming Jin, R. Jia | AAML | 20 / 172 / 0 | 07 Oct 2021
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models
Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan | SILM | 13 / 105 / 0 | 06 Oct 2021
How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Zhiyuan Zhang, Lingjuan Lyu, Weiqiang Wang, Lichao Sun, Xu Sun | 11 / 34 / 0 | 03 Sep 2021
Quantization Backdoors to Deep Learning Commercial Frameworks
Hua Ma, Huming Qiu, Yansong Gao, Zhi-Li Zhang, A. Abuadbba, Minhui Xue, Anmin Fu, Jiliang Zhang, S. Al-Sarawi, Derek Abbott | MQ | 20 / 19 / 0 | 20 Aug 2021
Accumulative Poisoning Attacks on Real-time Data
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu | 24 / 20 / 0 | 18 Jun 2021
Handcrafted Backdoors in Deep Neural Networks
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin | 11 / 71 / 0 | 08 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang | AAML, SILM | 11 / 47 / 0 | 03 Jun 2021
GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
Enmao Diao, Jie Ding, Vahid Tarokh | FedML | 17 / 16 / 0 | 02 Jun 2021
Robust Backdoor Attacks against Deep Neural Networks in Real Physical World
Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu | AAML | 16 / 43 / 0 | 15 Apr 2021