arXiv:2009.02276
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
International Conference on Learning Representations (ICLR), 2021
4 September 2020
Jonas Geiping
Liam H. Fowl
W. Ronny Huang
W. Czaja
Gavin Taylor
Michael Moeller
Tom Goldstein
AAML
Papers citing "Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching"
Showing 48 of 148 citing papers
MOVE: Effective and Harmless Ownership Verification via Embedded External Features
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Yiming Li
Linghui Zhu
Yang Liu
Yang Bai
Yong Jiang
Shutao Xia
Xiaochun Cao
Kui Ren
AAML
288
23
0
04 Aug 2022
Autoregressive Perturbations for Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Pedro Sandoval-Segura
Vasu Singla
Jonas Geiping
Micah Goldblum
Tom Goldstein
David Jacobs
AAML
344
51
0
08 Jun 2022
BagFlip: A Certified Defense against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Yuhao Zhang
Aws Albarghouthi
Loris D'Antoni
AAML
217
27
0
26 May 2022
On Collective Robustness of Bagging Against Data Poisoning
International Conference on Machine Learning (ICML), 2022
Ruoxin Chen
Zenan Li
Jie Li
Chentao Wu
Junchi Yan
203
24
0
26 May 2022
One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks
International Conference on Learning Representations (ICLR), 2022
Shutong Wu
Sizhe Chen
Cihang Xie
Xiaolin Huang
AAML
218
35
0
24 May 2022
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari
Matthew Jagielski
Alina Oprea
229
7
0
20 May 2022
PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
USENIX Security Symposium (USENIX Security), 2022
Hongbin Liu
Jinyuan Jia
Neil Zhenqiang Gong
276
41
0
13 May 2022
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
ACM Computing Surveys (ACM CSUR), 2022
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Sebastiano Vascon
Werner Zellinger
Bernhard A. Moser
Alina Oprea
Battista Biggio
Marcello Pelillo
Fabio Roli
AAML
398
170
0
04 May 2022
Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu
Gautam Kamath
Yaoliang Yu
AAML
286
30
0
19 Apr 2022
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Conference on Computer and Communications Security (CCS), 2022
Florian Tramèr
Reza Shokri
Ayrton San Joaquin
Hoang Minh Le
Matthew Jagielski
Sanghyun Hong
Nicholas Carlini
MIACV
380
136
0
31 Mar 2022
Robust Unlearnable Examples: Protecting Data Against Adversarial Learning
Shaopeng Fu
Fengxiang He
Yang Liu
Li Shen
Dacheng Tao
159
36
0
28 Mar 2022
WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
Yunjie Ge
Qianqian Wang
Jingfeng Zhang
Juntao Zhou
Yunzhu Zhang
Chao Shen
AAML
224
8
0
25 Mar 2022
Energy-Latency Attacks via Sponge Poisoning
Information Sciences (Inf. Sci.), 2022
Antonio Emanuele Cinà
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
SILM
564
33
0
14 Mar 2022
Robustly-reliable learners under poisoning attacks
Annual Conference Computational Learning Theory (COLT), 2022
Maria-Florina Balcan
Avrim Blum
Steve Hanneke
Dravyansh Sharma
AAML
OOD
183
16
0
08 Mar 2022
On the Effectiveness of Adversarial Training against Backdoor Attacks
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022
Yinghua Gao
Dongxian Wu
Jingfeng Zhang
Guanhao Gan
Shutao Xia
Gang Niu
Masashi Sugiyama
AAML
185
30
0
22 Feb 2022
Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches
Reena Zelenkova
J. Swallow
Pathum Chamikara Mahawaga Arachchige
Dongxi Liu
Mohan Baruwal Chhetri
S. Çamtepe
M. Grobler
Mahathir Almashor
AAML
111
2
0
18 Feb 2022
An Equivalence Between Data Poisoning and Byzantine Gradient Attacks
International Conference on Machine Learning (ICML), 2022
Sadegh Farhadkhani
R. Guerraoui
L. Hoang
Oscar Villemaud
FedML
221
29
0
17 Feb 2022
Holistic Adversarial Robustness of Deep Learning Models
AAAI Conference on Artificial Intelligence (AAAI), 2022
Pin-Yu Chen
Sijia Liu
AAML
381
22
0
15 Feb 2022
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
International Conference on Machine Learning (ICML), 2022
Wenxiao Wang
Alexander Levine
Soheil Feizi
AAML
217
65
0
05 Feb 2022
Learnability Lock: Authorized Learnability Control Through Adversarial Invertible Transformations
International Conference on Learning Representations (ICLR), 2022
Weiqi Peng
Jinghui Chen
AAML
133
5
0
03 Feb 2022
Can Adversarial Training Be Manipulated By Non-Robust Features?
Neural Information Processing Systems (NeurIPS), 2022
Lue Tao
Lei Feng
Jianguo Huang
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
724
17
0
31 Jan 2022
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Conference on Computer and Communications Security (CCS), 2022
Zayd Hammoudeh
Daniel Lowd
TDI
272
37
0
25 Jan 2022
Execute Order 66: Targeted Data Poisoning for Reinforcement Learning
Harrison Foley
Liam H. Fowl
Tom Goldstein
Gavin Taylor
AAML
199
10
0
03 Jan 2022
Defending against Model Stealing via Verifying Embedded External Features
AAAI Conference on Artificial Intelligence (AAAI), 2021
Yiming Li
Linghui Zhu
Yang Liu
Yong Jiang
Shutao Xia
Xiaochun Cao
AAML
236
80
0
07 Dec 2021
Availability Attacks Create Shortcuts
Knowledge Discovery and Data Mining (KDD), 2021
Da Yu
Huishuai Zhang
Wei Chen
Jian Yin
Tie-Yan Liu
AAML
258
68
0
01 Nov 2021
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan
A. Bhagoji
Haitao Zheng
Ben Y. Zhao
AAML
310
61
0
13 Oct 2021
Adversarial Examples Make Strong Poisons
Neural Information Processing Systems (NeurIPS), 2021
Liam H. Fowl
Micah Goldblum
Ping Yeh-Chiang
Jonas Geiping
Wojtek Czaja
Tom Goldstein
SILM
281
155
0
21 Jun 2021
Accumulative Poisoning Attacks on Real-time Data
Neural Information Processing Systems (NeurIPS), 2021
Tianyu Pang
Xiao Yang
Yinpeng Dong
Hang Su
Jun Zhu
233
22
0
18 Jun 2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri
Liam H. Fowl
Ramalingam Chellappa
Micah Goldblum
Tom Goldstein
SILM
431
150
0
16 Jun 2021
Disrupting Model Training with Adversarial Shortcuts
Ivan Evtimov
Ian Covert
Aditya Kusupati
Tadayoshi Kohno
AAML
193
10
0
12 Jun 2021
Defending Against Backdoor Attacks in Natural Language Generation
AAAI Conference on Artificial Intelligence (AAAI), 2021
Xiaofei Sun
Xiaoya Li
Yuxian Meng
Xiang Ao
Leilei Gan
Jiwei Li
Tianwei Zhang
AAML
SILM
278
59
0
03 Jun 2021
GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
Neural Information Processing Systems (NeurIPS), 2021
Enmao Diao
Jie Ding
Vahid Tarokh
FedML
359
17
0
02 Jun 2021
Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters
Javier Carnerero-Cano
Luis Muñoz-González
P. Spencer
Emil C. Lupu
AAML
179
11
0
23 May 2021
Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks
International Conference on Learning Representations (ICLR), 2021
Charles Jin
Melinda Sun
Martin Rinard
AAML
238
7
0
08 May 2021
The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
IEEE International Joint Conference on Neural Network (IJCNN), 2021
Antonio Emanuele Cinà
Sebastiano Vascon
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
AAML
165
13
0
23 Mar 2021
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia
Jonas Geiping
Valeriia Cherepanova
Liam H. Fowl
Arjun Gupta
Amin Ghiasi
Furong Huang
Micah Goldblum
Tom Goldstein
263
49
0
02 Mar 2021
What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning
Jonas Geiping
Liam H. Fowl
Gowthami Somepalli
Micah Goldblum
Michael Moeller
Tom Goldstein
TDI
AAML
SILM
190
46
0
26 Feb 2021
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Liam H. Fowl
Ping Yeh-Chiang
Micah Goldblum
Jonas Geiping
Arpit Bansal
W. Czaja
Tom Goldstein
200
45
0
16 Feb 2021
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Neural Information Processing Systems (NeurIPS), 2021
Lue Tao
Lei Feng
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
472
82
0
09 Feb 2021
With False Friends Like These, Who Can Notice Mistakes?
AAAI Conference on Artificial Intelligence (AAAI), 2020
Lue Tao
Lei Feng
Jinfeng Yi
Songcan Chen
AAML
371
6
0
29 Dec 2020
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
Basel Alomair
Aleksander Madry
Yue Liu
Tom Goldstein
SILM
483
349
0
18 Dec 2020
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Eitan Borgnia
Valeriia Cherepanova
Liam H. Fowl
Amin Ghiasi
Jonas Geiping
Micah Goldblum
Tom Goldstein
Arjun Gupta
AAML
216
141
0
18 Nov 2020
VenoMave: Targeted Poisoning Against Speech Recognition
H. Aghakhani
Lea Schonherr
Thorsten Eisenhofer
D. Kolossa
Thorsten Holz
Christopher Kruegel
Giovanni Vigna
AAML
289
20
0
21 Oct 2020
Backdoor Learning: A Survey
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020
Yiming Li
Yong Jiang
Zhifeng Li
Shutao Xia
AAML
573
739
0
17 Jul 2020
Subpopulation Data Poisoning Attacks
Conference on Computer and Communications Security (CCS), 2020
Matthew Jagielski
Giorgio Severi
Niklas Pousette Harger
Alina Oprea
AAML
SILM
244
138
0
24 Jun 2020
Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
International Conference on Machine Learning (ICML), 2020
Avi Schwarzschild
Micah Goldblum
Arjun Gupta
John P. Dickerson
Tom Goldstein
AAML
TDI
310
190
0
22 Jun 2020
Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
European Symposium on Security and Privacy (EuroS&P), 2020
H. Aghakhani
Dongyu Meng
Yu-Xiang Wang
Christopher Kruegel
Giovanni Vigna
AAML
315
122
0
01 May 2020
A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2020
Samuel Deng
Sanjam Garg
S. Jha
Saeed Mahloujifar
Mohammad Mahmoody
Abhradeep Thakurta
176
3
0
26 Mar 2020