arXiv:2009.02276 (v2)
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
International Conference on Learning Representations (ICLR), 2021
4 September 2020
Jonas Geiping
Liam H. Fowl
W. Ronny Huang
W. Czaja
Gavin Taylor
Michael Moeller
Tom Goldstein
AAML
Papers citing "Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching" (50 of 148 shown)
Online Algorithmic Recourse by Collective Action
Elliot Creager
Richard Zemel
29 Dec 2023
UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks
Bingyin Zhao
Yingjie Lao
AAML
17 Dec 2023
Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang
Neil Zhenqiang Gong
Michael K. Reiter
03 Dec 2023
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yixin Liu
Kaidi Xu
Xun Chen
Lichao Sun
22 Nov 2023
BrainWash: A Poisoning Attack to Forget in Continual Learning
Computer Vision and Pattern Recognition (CVPR), 2023
Ali Abbasi
Parsa Nooralinejad
Hamed Pirsiavash
Soheil Kolouri
CLL
KELM
AAML
20 Nov 2023
Understanding Variation in Subpopulation Susceptibility to Poisoning Attacks
Evan Rose
Fnu Suya
David Evans
AAML
20 Nov 2023
PACOL: Poisoning Attacks Against Continual Learners
Huayu Li
G. Ditzler
AAML
18 Nov 2023
Towards more Practical Threat Models in Artificial Intelligence Security
Kathrin Grosse
L. Bieringer
Tarek R. Besold
Alexandre Alahi
16 Nov 2023
Label Poisoning is All You Need
Neural Information Processing Systems (NeurIPS), 2023
Rishi Jha
J. Hayase
Sewoong Oh
AAML
29 Oct 2023
AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors
You-Ming Chang
Chen Yeh
Wei-Chen Chiu
Ning Yu
VPVLM
VLM
26 Oct 2023
Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
Scientific Reports (Sci Rep), 2023
S. M. Fazle
J. Mondal
Meem Arafat Manab
Xi Xiao
Sarfaraz Newaz
AAML
18 Oct 2023
Prompt Backdoors in Visual Prompt Learning
Hai Huang
Subrat Kishore Dutta
Michael Backes
Yun Shen
Yang Zhang
VLM
VPVLM
AAML
SILM
11 Oct 2023
Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand
Neural Information Processing Systems (NeurIPS), 2023
Junfeng Guo
Yiming Li
Lixu Wang
Shu-Tao Xia
Heng-Chiao Huang
Cong Liu
Boheng Li
09 Oct 2023
Towards Poisoning Fair Representations
International Conference on Learning Representations (ICLR), 2023
Tianci Liu
Haoyu Wang
Feijie Wu
Hengtong Zhang
Pan Li
Lu Su
Jing Gao
AAML
28 Sep 2023
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models
Minghan Deng
Zhong Zhang
Junming Shao
AAML
24 Sep 2023
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
Industrial Conference on Data Mining (IDM), 2023
Minh-Hao Van
Alycia N. Carey
Xintao Wu
TDI
AAML
15 Sep 2023
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System
Peixin Zhang
Jun Sun
Mingtian Tan
Xinyu Wang
AAML
12 Sep 2023
Dropout Attacks
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Andrew Yuan
Alina Oprea
Cheng Tan
04 Sep 2023
Adversarial Deep Reinforcement Learning for Cyber Security in Software Defined Networks
Luke Borchjes
Clement N. Nyirenda
L. Leenen
AAML
09 Aug 2023
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Neural Information Processing Systems (NeurIPS), 2023
Fnu Suya
X. Zhang
Yuan Tian
David Evans
OOD
AAML
03 Jul 2023
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks
Wenxiao Wang
Soheil Feizi
AAML
28 Jun 2023
On the Exploitability of Instruction Tuning
Neural Information Processing Systems (NeurIPS), 2023
Manli Shu
Zhenghao Hu
Chen Zhu
Jonas Geiping
Chaowei Xiao
Tom Goldstein
SILM
28 Jun 2023
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
Javier Carnerero-Cano
Luis Muñoz-González
P. Spencer
Emil C. Lupu
AAML
02 Jun 2023
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning
D. Vos
Jelle Vos
Tianyu Li
Z. Erkin
S. Verwer
FedML
24 May 2023
Sharpness-Aware Data Poisoning Attack
International Conference on Learning Representations (ICLR), 2023
Pengfei He
Han Xu
Jie Ren
Yingqian Cui
Hui Liu
Charu C. Aggarwal
Shucheng Zhou
AAML
24 May 2023
Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks
Jingfeng Zhang
Bo Song
Bo Han
Lei Liu
Gang Niu
Masashi Sugiyama
AAML
30 Apr 2023
BadVFL: Backdoor Attacks in Vertical Federated Learning
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Mohammad Naseri
Yufei Han
Emiliano De Cristofaro
FedML
AAML
18 Apr 2023
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks
Neural Information Processing Systems (NeurIPS), 2023
Wenhan Yang
Jingdong Gao
Baharan Mirzasoleiman
VLM
13 Mar 2023
Adversarial Sampling for Fairness Testing in Deep Neural Network
International Journal of Advanced Computer Science and Applications (IJACSA), 2023
Tosin Ige
William Marfo
Justin Tonkinson
Sikiru Adewale
Bolanle Hafiz Matti
OOD
06 Mar 2023
Randomized Kaczmarz in Adversarial Distributed Setting
SIAM Journal on Scientific Computing (SISC), 2023
Longxiu Huang
Xia Li
Deanna Needell
24 Feb 2023
Poisoning Web-Scale Training Datasets is Practical
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Nicholas Carlini
Matthew Jagielski
Christopher A. Choquette-Choo
Daniel Paleka
Will Pearce
Hyrum S. Anderson
Seth Neel
Kurt Thomas
Florian Tramèr
SILM
20 Feb 2023
Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines
Conference on Computer and Communications Security (CCS), 2023
Eugene Bagdasaryan
Vitaly Shmatikov
AAML
09 Feb 2023
Algorithmic Collective Action in Machine Learning
International Conference on Machine Learning (ICML), 2023
Moritz Hardt
Eric Mazumdar
Celestine Mendler-Dünner
Tijana Zrnic
08 Feb 2023
Temporal Robustness against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2023
Wenxiao Wang
Soheil Feizi
AAML
OOD
07 Feb 2023
Uncovering Adversarial Risks of Test-Time Adaptation
International Conference on Machine Learning (ICML), 2023
Tong Wu
Feiran Jia
Xiangyu Qi
Jiachen T. Wang
Vikash Sehwag
Saeed Mahloujifar
Prateek Mittal
AAML
TTA
29 Jan 2023
PECAN: A Deterministic Certified Defense Against Backdoor Attacks
Yuhao Zhang
Aws Albarghouthi
Loris D'Antoni
AAML
27 Jan 2023
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
H. Aghakhani
Wei Dai
Andre Manoel
Xavier Fernandes
Anant Kharkar
Christopher Kruegel
Giovanni Vigna
David Evans
B. Zorn
Robert Sim
SILM
06 Jan 2023
Cramming: Training a Language Model on a Single GPU in One Day
International Conference on Machine Learning (ICML), 2022
Jonas Geiping
Tom Goldstein
MoE
28 Dec 2022
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2022
Jimmy Z. Di
Jack Douglas
Jayadev Acharya
Gautam Kamath
Ayush Sekhari
MU
21 Dec 2022
Transformers Go for the LOLs: Generating (Humourous) Titles from Scientific Abstracts End-to-End
Yanran Chen
Steffen Eger
20 Dec 2022
A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness
APSIPA Transactions on Signal and Information Processing (TASIP), 2022
Tiantian Feng
Rajat Hebbar
Nicholas Mehlman
Xuan Shi
Aditya Kommineni
Shrikanth Narayanan
18 Dec 2022
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu
Wenjie Qu
Jinyuan Jia
Neil Zhenqiang Gong
SSL
06 Dec 2022
Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor
Vincent Emanuele
SILM
AAML
05 Dec 2022
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors
International Conference on Learning Representations (ICLR), 2022
Sizhe Chen
Geng Yuan
Xinwen Cheng
Yifan Gong
Minghai Qin
Yanzhi Wang
Xiaolin Huang
AAML
22 Nov 2022
Not All Poisons are Created Equal: Robust Training against Data Poisoning
International Conference on Machine Learning (ICML), 2022
Yu Yang
Tianwei Liu
Baharan Mirzasoleiman
AAML
18 Oct 2022
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Neural Information Processing Systems (NeurIPS), 2022
Yiming Li
Yang Bai
Yong Jiang
Yong-Liang Yang
Shutao Xia
Bo Li
AAML
27 Sep 2022
Data Isotopes for Data Provenance in DNNs
Proceedings on Privacy Enhancing Technologies (PoPETs), 2022
Emily Wenger
Xiuyu Li
Ben Y. Zhao
Vitaly Shmatikov
29 Aug 2022
SNAP: Efficient Extraction of Private Properties with Poisoning
Harsh Chaudhari
John Abascal
Alina Oprea
Matthew Jagielski
Florian Tramèr
Jonathan R. Ullman
MIACV
25 Aug 2022
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2022
Tianwei Liu
Yu Yang
Baharan Mirzasoleiman
AAML
14 Aug 2022
Lethal Dose Conjecture on Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Wenxiao Wang
Alexander Levine
Soheil Feizi
FedML
05 Aug 2022