Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
arXiv: 2009.02276
International Conference on Learning Representations (ICLR), 2020
4 September 2020
Jonas Geiping
Liam H. Fowl
W. Ronny Huang
W. Czaja
Gavin Taylor
Michael Moeller
Tom Goldstein
AAML

Papers citing "Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching"

50 / 148 papers shown
Online Algorithmic Recourse by Collective Action
Elliot Creager
Richard Zemel
29 Dec 2023
UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks
Bingyin Zhao
Yingjie Lao
AAML
17 Dec 2023
Mendata: A Framework to Purify Manipulated Training Data
Zonghao Huang
Neil Zhenqiang Gong
Michael K. Reiter
03 Dec 2023
Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
AAAI Conference on Artificial Intelligence (AAAI), 2023
Yixin Liu
Kaidi Xu
Xun Chen
Lichao Sun
22 Nov 2023
BrainWash: A Poisoning Attack to Forget in Continual Learning
Computer Vision and Pattern Recognition (CVPR), 2023
Ali Abbasi
Parsa Nooralinejad
Hamed Pirsiavash
Soheil Kolouri
CLL, KELM, AAML
20 Nov 2023
Understanding Variation in Subpopulation Susceptibility to Poisoning Attacks
Evan Rose
Fnu Suya
David Evans
AAML
20 Nov 2023
PACOL: Poisoning Attacks Against Continual Learners
Huayu Li
G. Ditzler
AAML
18 Nov 2023
Towards more Practical Threat Models in Artificial Intelligence Security
Kathrin Grosse
L. Bieringer
Tarek R. Besold
Alexandre Alahi
16 Nov 2023
Label Poisoning is All You Need
Neural Information Processing Systems (NeurIPS), 2023
Rishi Jha
J. Hayase
Sewoong Oh
AAML
29 Oct 2023
AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors
You-Ming Chang
Chen Yeh
Wei-Chen Chiu
Ning Yu
VP, VLM
26 Oct 2023
Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm
Scientific Reports (Sci Rep), 2023
S. M. Fazle
J. Mondal
Meem Arafat Manab
Xi Xiao
Sarfaraz Newaz
AAML
18 Oct 2023
Prompt Backdoors in Visual Prompt Learning
Hai Huang
Subrat Kishore Dutta
Michael Backes
Yun Shen
Yang Zhang
VLM, VP, AAML, SILM
11 Oct 2023
Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand
Neural Information Processing Systems (NeurIPS), 2023
Junfeng Guo
Yiming Li
Lixu Wang
Shu-Tao Xia
Heng-Chiao Huang
Cong Liu
Boheng Li
09 Oct 2023
Towards Poisoning Fair Representations
International Conference on Learning Representations (ICLR), 2023
Tianci Liu
Haoyu Wang
Feijie Wu
Hengtong Zhang
Pan Li
Lu Su
Jing Gao
AAML
28 Sep 2023
Seeing Is Not Always Believing: Invisible Collision Attack and Defence on Pre-Trained Models
Minghan Deng
Zhong Zhang
Junming Shao
AAML
24 Sep 2023
HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
Industrial Conference on Data Mining (IDM), 2023
Minh-Hao Van
Alycia N. Carey
Xintao Wu
TDI, AAML
15 Sep 2023
Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System
Peixin Zhang
Jun Sun
Mingtian Tan
Xinyu Wang
AAML
12 Sep 2023
Dropout Attacks
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Andrew Yuan
Alina Oprea
Cheng Tan
04 Sep 2023
Adversarial Deep Reinforcement Learning for Cyber Security in Software Defined Networks
Luke Borchjes
Clement N. Nyirenda
L. Leenen
AAML
09 Aug 2023
What Distributions are Robust to Indiscriminate Poisoning Attacks for Linear Learners?
Neural Information Processing Systems (NeurIPS), 2023
Fnu Suya
X. Zhang
Yuan Tian
David Evans
OOD, AAML
03 Jul 2023
On Practical Aspects of Aggregation Defenses against Data Poisoning Attacks
Wenxiao Wang
Soheil Feizi
AAML
28 Jun 2023
On the Exploitability of Instruction Tuning
Neural Information Processing Systems (NeurIPS), 2023
Manli Shu
Zhenghao Hu
Chen Zhu
Jonas Geiping
Chaowei Xiao
Tom Goldstein
SILM
28 Jun 2023
Hyperparameter Learning under Data Poisoning: Analysis of the Influence of Regularization via Multiobjective Bilevel Optimization
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
Javier Carnerero-Cano
Luis Muñoz-González
P. Spencer
Emil C. Lupu
AAML
02 Jun 2023
Differentially-Private Decision Trees and Provable Robustness to Data Poisoning
D. Vos
Jelle Vos
Tianyu Li
Z. Erkin
S. Verwer
FedML
24 May 2023
Sharpness-Aware Data Poisoning Attack
International Conference on Learning Representations (ICLR), 2023
Pengfei He
Han Xu
Jie Ren
Yingqian Cui
Hui Liu
Charu C. Aggarwal
Shucheng Zhou
AAML
24 May 2023
Assessing Vulnerabilities of Adversarial Learning Algorithm through Poisoning Attacks
Jingfeng Zhang
Bo Song
Bo Han
Lei Liu
Gang Niu
Masashi Sugiyama
AAML
30 Apr 2023
BadVFL: Backdoor Attacks in Vertical Federated Learning
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Mohammad Naseri
Yufei Han
Emiliano De Cristofaro
FedML, AAML
18 Apr 2023
Robust Contrastive Language-Image Pre-training against Data Poisoning and Backdoor Attacks
Neural Information Processing Systems (NeurIPS), 2023
Wenhan Yang
Jingdong Gao
Baharan Mirzasoleiman
VLM
13 Mar 2023
Adversarial Sampling for Fairness Testing in Deep Neural Network
International Journal of Advanced Computer Science and Applications (IJACSA), 2023
Tosin Ige
William Marfo
Justin Tonkinson
Sikiru Adewale
Bolanle Hafiz Matti
OOD
06 Mar 2023
Randomized Kaczmarz in Adversarial Distributed Setting
SIAM Journal on Scientific Computing (SISC), 2023
Longxiu Huang
Xia Li
Deanna Needell
24 Feb 2023
Poisoning Web-Scale Training Datasets is Practical
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
Nicholas Carlini
Matthew Jagielski
Christopher A. Choquette-Choo
Daniel Paleka
Will Pearce
Hyrum S. Anderson
Seth Neel
Kurt Thomas
Florian Tramèr
SILM
20 Feb 2023
Mithridates: Auditing and Boosting Backdoor Resistance of Machine Learning Pipelines
Conference on Computer and Communications Security (CCS), 2023
Eugene Bagdasaryan
Vitaly Shmatikov
AAML
09 Feb 2023
Algorithmic Collective Action in Machine Learning
International Conference on Machine Learning (ICML), 2023
Moritz Hardt
Eric Mazumdar
Celestine Mendler-Dünner
Tijana Zrnic
08 Feb 2023
Temporal Robustness against Data Poisoning
Neural Information Processing Systems (NeurIPS), 2023
Wenxiao Wang
Soheil Feizi
AAML, OOD
07 Feb 2023
Uncovering Adversarial Risks of Test-Time Adaptation
International Conference on Machine Learning (ICML), 2023
Tong Wu
Feiran Jia
Xiangyu Qi
Jiachen T. Wang
Vikash Sehwag
Saeed Mahloujifar
Prateek Mittal
AAML, TTA
29 Jan 2023
PECAN: A Deterministic Certified Defense Against Backdoor Attacks
Yuhao Zhang
Aws Albarghouthi
Loris D'Antoni
AAML
27 Jan 2023
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
IEEE Symposium on Security and Privacy (IEEE S&P), 2023
H. Aghakhani
Wei Dai
Andre Manoel
Xavier Fernandes
Anant Kharkar
Christopher Kruegel
Giovanni Vigna
David Evans
B. Zorn
Robert Sim
SILM
06 Jan 2023
Cramming: Training a Language Model on a Single GPU in One Day
International Conference on Machine Learning (ICML), 2022
Jonas Geiping
Tom Goldstein
MoE
28 Dec 2022
Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2022
Jimmy Z. Di
Jack Douglas
Jayadev Acharya
Gautam Kamath
Ayush Sekhari
MU
21 Dec 2022
Transformers Go for the LOLs: Generating (Humourous) Titles from Scientific Abstracts End-to-End
Yanran Chen
Steffen Eger
20 Dec 2022
A Review of Speech-centric Trustworthy Machine Learning: Privacy, Safety, and Fairness
APSIPA Transactions on Signal and Information Processing (TASIP), 2022
Tiantian Feng
Rajat Hebbar
Nicholas Mehlman
Xuan Shi
Aditya Kommineni
Shrikanth Narayanan
18 Dec 2022
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu
Wenjie Qu
Jinyuan Jia
Neil Zhenqiang Gong
SSL
06 Dec 2022
Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor
Vincent Emanuele
SILM, AAML
05 Dec 2022
Self-Ensemble Protection: Training Checkpoints Are Good Data Protectors
International Conference on Learning Representations (ICLR), 2022
Sizhe Chen
Geng Yuan
Xinwen Cheng
Yifan Gong
Minghai Qin
Yanzhi Wang
Xiaolin Huang
AAML
22 Nov 2022
Not All Poisons are Created Equal: Robust Training against Data Poisoning
International Conference on Machine Learning (ICML), 2022
Yu Yang
Tianwei Liu
Baharan Mirzasoleiman
AAML
18 Oct 2022
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Neural Information Processing Systems (NeurIPS), 2022
Yiming Li
Yang Bai
Yong Jiang
Yong-Liang Yang
Shutao Xia
Bo Li
AAML
27 Sep 2022
Data Isotopes for Data Provenance in DNNs
Proceedings on Privacy Enhancing Technologies (PoPETs), 2022
Emily Wenger
Xiuyu Li
Ben Y. Zhao
Vitaly Shmatikov
29 Aug 2022
SNAP: Efficient Extraction of Private Properties with Poisoning
Harsh Chaudhari
John Abascal
Alina Oprea
Matthew Jagielski
Florian Tramèr
Jonathan R. Ullman
MIACV
25 Aug 2022
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2022
Tianwei Liu
Yu Yang
Baharan Mirzasoleiman
AAML
14 Aug 2022
Lethal Dose Conjecture on Data Poisoning
Neural Information Processing Systems (NeurIPS), 2022
Wenxiao Wang
Alexander Levine
Soheil Feizi
FedML
05 Aug 2022