arXiv: 2011.09527
Cited By
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
18 November 2020
Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
AAML
Papers citing "Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff" (50 of 88 papers shown)
PDLRecover: Privacy-preserving Decentralized Model Recovery with Machine Unlearning
Xiangman Li, Xiaodong Wu, Jianbing Ni, Mohamed Mahmoud, Maazen Alsabaan
18 Jun 2025 · AAML

Prototype Guided Backdoor Defense
Venkat Adithya Amula, Sunayana Samavedam, Saurabh Saini, Avani Gupta, Narayanan P J
26 Mar 2025 · AAML

A Robust Attack: Displacement Backdoor Attack
Yong Li, Han Gao
14 Feb 2025 · AAML

Adversarial Hubness in Multi-Modal Retrieval
Tingwei Zhang, Fnu Suya, Rishi Jha, Collin Zhang, Vitaly Shmatikov
18 Dec 2024 · AAML

Active Poisoning: Efficient Backdoor Attacks on Transfer Learning-Based Brain-Computer Interfaces
Science China Information Sciences (Sci China Inf Sci), 2023
X. Jiang, L. Meng, S. Li, D. Wu
13 Dec 2024 · AAML

SoK: A Systems Perspective on Compound AI Threats and Countermeasures
Sarbartha Banerjee, Prateek Sahu, Mulong Luo, Anjo Vahldiek-Oberwagner, N. Yadwadkar, Mohit Tiwari
20 Nov 2024 · AAML

Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations
Lu Pang, Tao Sun, Weimin Lyu, Haibin Ling, Chong Chen
16 Oct 2024 · AAML

Using Interleaved Ensemble Unlearning to Keep Backdoors at Bay for Finetuning Vision Transformers
Zeyu Michael Li
01 Oct 2024 · AAML

Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery
Ching-Chun Chang, Kai Gao, Shuying Xu, Anastasia Kordoni, Christopher Leckie, Isao Echizen
29 Sep 2024

Trustworthy Text-to-Image Diffusion Models: A Timely and Focused Survey
Yi Zhang, Zhen Chen, Chih-Hong Cheng, Wenjie Ruan, Xiaowei Huang, Dezong Zhao, David Flynn, Siddartha Khastgir, Xingyu Zhao
26 Sep 2024 · MedIm

The poison of dimensionality
Lê-Nguyên Hoang
25 Sep 2024

Data Poisoning and Leakage Analysis in Federated Learning
Wenqi Wei, Tiansheng Huang, Zachary Yahn, Anoop Singhal, Margaret Loper, Ling Liu
19 Sep 2024 · FedML, SILM

Fisher Information guided Purification against Backdoor Attacks
Conference on Computer and Communications Security (CCS), 2024
Nazmul Karim, Abdullah Al Arafat, Adnan Siraj Rakin, Zhishan Guo, Nazanin Rahnavard
01 Sep 2024 · AAML

A Practical Trigger-Free Backdoor Attack on Neural Networks
Jiahao Wang, Xianglong Zhang, Xiuzhen Cheng, Pengfei Hu, Guoming Zhang
21 Aug 2024 · AAML

Securing Voice Authentication Applications Against Targeted Data Poisoning
Alireza Mohammadi, Keshav Sood, D. Thiruvady, A. Nazari
25 Jun 2024 · AAML

ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification
Xianlong Wang, Shengshan Hu, Yechao Zhang, Ziqi Zhou, Leo Yu Zhang, Peng Xu, Wei Wan, Hai Jin
21 Jun 2024 · AAML

A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks
Journal of Information Security and Applications (JISA), 2024
Hengzhu Liu, Ping Xiong, Tianqing Zhu, Philip S. Yu
10 Jun 2024

PSBD: Prediction Shift Uncertainty Unlocks Backdoor Detection
Computer Vision and Pattern Recognition (CVPR), 2024
Wei Li, Pin-Yu Chen, Sijia Liu, Ren Wang
09 Jun 2024 · AAML

Generalization Bound and New Algorithm for Clean-Label Backdoor Attack
Lijia Yu, Shuang Liu, Yibo Miao, Xiao-Shan Gao, Lijun Zhang
02 Jun 2024 · AAML

PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
28 May 2024 · AAML

PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Sunay Bhat, Jeffrey Q. Jiang, Omead Brandon Pooladzandi, Alexander Branch, Gregory Pottie
28 May 2024 · AAML

Partial train and isolate, mitigate backdoor attack
Yong Li, Han Gao
26 May 2024 · AAML

CloudFort: Enhancing Robustness of 3D Point Cloud Classification Against Backdoor Attacks via Spatial Partitioning and Ensemble Prediction
Wenhao Lan, Yijun Yang, Haihua Shen, Sha Li
22 Apr 2024 · 3DPC

The Victim and The Beneficiary: Exploiting a Poisoned Model to Train a Clean Model on Poisoned Data
Zixuan Zhu, Rui Wang, Cong Zou, Lihua Jing
17 Apr 2024 · AAML, FedML

LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Shuyang Cheng, Guanhong Tao, Yingqi Liu, Guangyu Shen, Shengwei An, Shiwei Feng, Xiangzhe Xu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang
25 Mar 2024 · AAML

Have You Poisoned My Data? Defending Neural Networks against Data Poisoning
Fabio De Gaspari, Dorjan Hitaj, Luigi V. Mancini
20 Mar 2024 · AAML, TDI

Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency
Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu
15 Mar 2024 · AAML

Immunization against harmful fine-tuning attacks
Domenic Rosati, Jan Wehner, Kai Williams, Lukasz Bartoszcze, Jan Batzner, Hassan Sajjad, Frank Rudzicz
26 Feb 2024 · AAML

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

Manipulating Trajectory Prediction with Backdoors
Kaouther Messaoud, Kathrin Grosse, Mickaël Chen, Matthieu Cord, Patrick Pérez, Alexandre Alahi
21 Dec 2023 · AAML, LLMSV

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan
07 Dec 2023 · AAML

Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations
Xianlong Wang, Shengshan Hu, Minghui Li, Zhifei Yu, Ziqi Zhou, Leo Yu Zhang
30 Nov 2023 · AAML

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift
AAAI Conference on Artificial Intelligence (AAAI), 2023
Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, ..., Shuyang Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang
27 Nov 2023 · DiffM, AAML

Effective Backdoor Mitigation in Vision-Language Models Depends on the Pre-training Objective
Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav M. Das, Chirag Shah, John P Dickerson, Jeff Bilmes
25 Nov 2023 · AAML

Does Differential Privacy Prevent Backdoor Attacks in Practice?
Database Security (DBSec), 2023
Fereshteh Razmi, Jian Lou, Li Xiong
10 Nov 2023 · AAML

Toward Robust Recommendation via Real-time Vicinal Defense
Yichang Xu, Chenwang Wu, Defu Lian
29 Sep 2023 · AAML

HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
Industrial Conference on Data Mining (IDM), 2023
Minh-Hao Van, Alycia N. Carey, Xintao Wu
15 Sep 2023 · TDI, AAML

Backdoor Attacks against Voice Recognition Systems: A Survey
ACM Computing Surveys (ACM Comput. Surv.), 2023
Baochen Yan, Jiahe Lan, Zheng Yan
23 Jul 2023 · AAML

Efficient Backdoor Removal Through Natural Gradient Fine-tuning
Nazmul Karim, Abdullah Al Arafat, Umar Khalid, Zhishan Guo, Naznin Rahnavard
30 Jun 2023 · AAML

Exploring Model Dynamics for Accumulative Poisoning Discovery
International Conference on Machine Learning (ICML), 2023
Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han
06 Jun 2023 · AAML

Mitigating Backdoor Attack Via Prerequisite Transformation
Han Gao
03 Jun 2023 · AAML

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning
Computer Vision and Pattern Recognition (CVPR), 2023
Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, L Tan
04 Apr 2023 · AAML

Detecting Backdoors in Pre-trained Encoders
Computer Vision and Pattern Recognition (CVPR), 2023
Shiwei Feng, Guanhong Tao, Shuyang Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang
23 Mar 2023

Backdoor Defense via Adaptively Splitting Poisoned Dataset
Computer Vision and Pattern Recognition (CVPR), 2023
Kuofeng Gao, Yang Bai, Jindong Gu, Yong-Liang Yang, Shutao Xia
23 Mar 2023 · AAML

Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks
Neural Information Processing Systems (NeurIPS), 2023
Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman
13 Mar 2023 · VLM

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning
IEEE International Conference on Computer Vision (ICCV), 2023
Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang
06 Mar 2023 · AAML

Run-Off Election: Improved Provable Defense against Data Poisoning Attacks
International Conference on Machine Learning (ICML), 2023
Keivan Rezaei, Kiarash Banihashem, Atoosa Malemir Chegini, Soheil Feizi
05 Feb 2023 · AAML

BackdoorBox: A Python Toolbox for Backdoor Learning
Yiming Li, Mengxi Ya, Yang Bai, Yong Jiang, Shutao Xia
01 Feb 2023 · AAML

Distilling Cognitive Backdoor Patterns within an Image
International Conference on Learning Representations (ICLR), 2023
Hanxun Huang, Jiabo He, S. Erfani, James Bailey
26 Jan 2023 · AAML

Towards Understanding How Self-training Tolerates Data Backdoor Poisoning
Soumyadeep Pal, Ren Wang, Yuguang Yao, Sijia Liu
20 Jan 2023
Page 1 of 2