Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching

International Conference on Learning Representations (ICLR), 2021
4 September 2020
Jonas Geiping
Liam H. Fowl
W. Ronny Huang
W. Czaja
Gavin Taylor
Michael Moeller
Tom Goldstein
AAML
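For readers skimming this citation list, a minimal sketch of the gradient-matching idea named in the title may help: the attacker perturbs a small set of clean-labeled training samples so that the training gradient they induce aligns (in cosine similarity) with the adversarial gradient that would misclassify a chosen target. The toy linear model, budget, labels, and hyperparameters below are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of gradient-matching data poisoning, assuming a toy
# linear "victim" model and an l_inf perturbation budget; every name
# and hyperparameter here is illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(32, 10)            # stand-in for a trained network
params = list(model.parameters())

target_x = torch.randn(1, 32)              # target the attacker wants misclassified
adv_label = torch.tensor([3])              # attacker-chosen label (assumption)
poison_x = torch.randn(8, 32)              # clean base samples to perturb
poison_y = torch.randint(0, 10, (8,))      # their true labels stay clean

# Adversarial gradient: the direction that pushes the target toward adv_label.
adv_grad = torch.autograd.grad(
    F.cross_entropy(model(target_x), adv_label), params)

delta = torch.zeros_like(poison_x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.1)
eps = 0.5                                  # l_inf budget (assumption)

for _ in range(100):
    opt.zero_grad()
    # Training gradient the perturbed poisons would induce.
    poison_grad = torch.autograd.grad(
        F.cross_entropy(model(poison_x + delta), poison_y),
        params, create_graph=True)
    # Minimize 1 - cosine similarity between the two gradients.
    dot = sum((a * p).sum() for a, p in zip(adv_grad, poison_grad))
    norms = (sum(a.pow(2).sum() for a in adv_grad).sqrt()
             * sum(p.pow(2).sum() for p in poison_grad).sqrt())
    (1 - dot / norms).backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)            # keep poisons close to their base samples
```

In the full attack these perturbed samples are inserted into the victim's training set with their original labels intact; a model later trained on the poisoned data tends to misclassify the chosen target.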

Papers citing "Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching"

50 / 148 papers shown
Provable Watermarking for Data Poisoning Attacks
Yifan Zhu
Lijia Yu
Xiao-Shan Gao
AAML
139
0
0
10 Oct 2025
PUREVQ-GAN: Defending Data Poisoning Attacks through Vector-Quantized Bottlenecks
Alexander Branch
Omead Brandon Pooladzandi
Radin Khosraviani
Sunay Bhat
Jeffrey Q. Jiang
Gregory Pottie
69
0
0
30 Sep 2025
Coward: Collision-based Watermark for Proactive Federated Backdoor Detection
Wenjie Li
Siying Gu
Yiming Li
Kangjie Chen
Zhili Chen
Tianwei Zhang
Shu-Tao Xia
Dacheng Tao
AAML
146
1
0
04 Aug 2025
Defending Against Beta Poisoning Attacks in Machine Learning Models
Computer Science Symposium in Russia (CSR), 2025
Nilufer Gulciftci
M. Emre Gursoy
AAML
141
0
0
02 Aug 2025
A Practical and Secure Byzantine Robust Aggregator
De Zhang Lee
Aashish Kolluri
P. Saxena
Ee-Chien Chang
AAML FedML
337
2
0
29 Jun 2025
Poison Once, Control Anywhere: Clean-Text Visual Backdoors in VLM-based Mobile Agents
Xuan Wang
Yaning Tan
Zhe Liu
Yi Yu
Yuliang Lu
Xiaochun Cao
Ee-Chien Chang
X. Gao
AAML
464
0
0
16 Jun 2025
Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection
Conference on Uncertainty in Artificial Intelligence (UAI), 2025
Tianci Liu
Tong Yang
Quan Zhang
Qi Lei
WIGM AAML
301
0
0
03 Jun 2025
Sybil-based Virtual Data Poisoning Attacks in Federated Learning
Changxun Zhu
Qilong Wu
Lingjuan Lyu
Shibei Xue
AAML FedML
328
0
0
15 May 2025
Like Oil and Water: Group Robustness Methods and Poisoning Defenses May Be at Odds
International Conference on Learning Representations (ICLR), 2025
Michael-Andrei Panaitescu-Liess
Yigitcan Kaya
Sicheng Zhu
Furong Huang
Tudor Dumitras
AAML
268
0
0
02 Apr 2025
Instance-Level Data-Use Auditing of Visual ML Models
Zonghao Huang
Neil Zhenqiang Gong
Michael K. Reiter
MLAU
410
1
0
28 Mar 2025
Data Poisoning in Deep Learning: A Survey
Pinlong Zhao
Weiyao Zhu
Pengfei Jiao
Di Gao
Ou Wu
AAML
498
15
0
27 Mar 2025
Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing
Shuai Li
Jie Zhang
Yuang Qi
Kejiang Chen
Tianwei Zhang
Weinan Zhang
Nenghai Yu
190
0
0
27 Mar 2025
Targeted Data Poisoning for Black-Box Audio Datasets Ownership Verification
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025
Wassim Bouaziz
El-Mahdi El-Mhamdi
Nicolas Usunier
283
2
0
13 Mar 2025
Seal Your Backdoor with Variational Defense
Ivan Sabolić
Matej Grcić
Sinisa Segvic
AAML
1.1K
1
0
11 Mar 2025
Towards Autonomous Reinforcement Learning for Real-World Robotic Manipulation with Large Language Models
IEEE Robotics and Automation Letters (IEEE RA-L), 2025
Niccolò Turcato
Matteo Iovino
Aris Synodinos
Alberto Dalla Libera
R. Carli
Pietro Falco
LM&Ro
475
1
0
06 Mar 2025
Approaching the Harm of Gradient Attacks While Only Flipping Labels
Abdessamad El-Kabid
El-Mahdi El-Mhamdi
AAML
264
1
0
28 Feb 2025
Neuroplasticity and Corruption in Model Mechanisms: A Case Study Of Indirect Object Identification
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Vishnu Kabir Chhabra
Ding Zhu
Mohammad Mahdi Khalili
325
5
0
27 Feb 2025
TAPE: Tailored Posterior Difference for Auditing of Machine Unlearning
The Web Conference (WWW), 2025
Weiqi Wang
Zhiyi Tian
An Liu
Shui Yu
320
3
0
27 Feb 2025
Imitation Game for Adversarial Disillusion with Multimodal Generative Chain-of-Thought Role-Play
Ching-Chun Chang
Fan-Yun Chen
Shih-Hong Gu
Kai Gao
Hanrui Wang
Isao Echizen
AAML
1.0K
0
0
31 Jan 2025
VENENA: A Deceptive Visual Encryption Framework for Wireless Semantic Secrecy
Bin Han
Ye Yuan
Hans D. Schotten
200
1
0
18 Jan 2025
Defending Against Neural Network Model Inversion Attacks via Data Poisoning
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2024
Shuai Zhou
Dayong Ye
Tianqing Zhu
Wanlei Zhou
AAML
214
4
0
10 Dec 2024
Delta-Influence: Unlearning Poisons via Influence Functions
Wenjie Li
Jiawei Li
Christian Schroeder de Witt
Amartya Sanyal
MU TDI
427
9
0
20 Nov 2024
Backdoor in Seconds: Unlocking Vulnerabilities in Large Pre-trained Models via Model Editing
Dongliang Guo
Mengxuan Hu
Zihan Guan
Junfeng Guo
Thomas Hartvigsen
Sheng Li
AAML
363
4
0
23 Oct 2024
Fragile Giants: Understanding the Susceptibility of Models to Subpopulation Attacks
Isha Gupta
Hidde Lycklama
Emanuel Opel
Evan Rose
Anwar Hithnawi
AAML
235
1
0
11 Oct 2024
Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning
International Conference on Learning Representations (ICLR), 2024
Wassim Bouaziz
El-Mahdi El-Mhamdi
Nicolas Usunier
TDI AAML
249
8
0
09 Oct 2024
Psychometrics for Hypnopaedia-Aware Machinery via Chaotic Projection of Artificial Mental Imagery
Ching-Chun Chang
Kai Gao
Shuying Xu
Anastasia Kordoni
Christopher Leckie
Isao Echizen
167
0
0
29 Sep 2024
UTrace: Poisoning Forensics for Private Collaborative Learning
Evan Rose
Hidde Lycklama
Harsh Chaudhari
Niklas Britz
Anwar Hithnawi
Alina Oprea
465
2
0
23 Sep 2024
Data Poisoning and Leakage Analysis in Federated Learning
Wenqi Wei
Tiansheng Huang
Zachary Yahn
Anoop Singhal
Margaret Loper
Ling Liu
FedML SILM
224
2
0
19 Sep 2024
Security Concerns in Quantum Machine Learning as a Service
Satwik Kundu
Swaroop Ghosh
285
5
0
18 Aug 2024
Clean-Label Physical Backdoor Attacks with Data Distillation
Thinh Dao
Cuong Chi Le
Khoa D. Doan
AAML
479
2
0
27 Jul 2024
Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks
Lukas Gosch
Mahalakshmi Sabanayagam
Debarghya Ghoshdastidar
Stephan Günnemann
AAML
541
6
0
15 Jul 2024
Distribution Learnability and Robustness
Shai Ben-David
Alex Bie
Gautam Kamath
Tosca Lechner
320
4
0
25 Jun 2024
Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk
Jimmy Z. Di
Yiwei Lu
Gautam Kamath
Ayush Sekhari
Seth Neel
AAML MU
530
28
0
25 Jun 2024
Really Unlearned? Verifying Machine Unlearning via Influential Sample Pairs
Heng Xu
Tianqing Zhu
Lefeng Zhang
Wanlei Zhou
MU AAML
241
5
0
16 Jun 2024
RMF: A Risk Measurement Framework for Machine Learning Models
ARES, 2024
Jan Schröder
Jakub Breier
132
1
0
15 Jun 2024
Generalization Bound and New Algorithm for Clean-Label Backdoor Attack
Lijia Yu
Shuang Liu
Yibo Miao
Xiao-Shan Gao
Lijun Zhang
AAML
303
10
0
02 Jun 2024
Phantom: General Backdoor Attacks on Retrieval Augmented Language Generation
Harsh Chaudhari
Giorgio Severi
John Abascal
Matthew Jagielski
Christopher A. Choquette-Choo
Milad Nasr
Cristina Nita-Rotaru
Alina Oprea
SILM AAML
373
57
0
30 May 2024
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi
Jeffrey Q. Jiang
Sunay Bhat
Gregory Pottie
AAML
294
0
0
28 May 2024
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Sunay Bhat
Jeffrey Q. Jiang
Omead Brandon Pooladzandi
Alexander Branch
Gregory Pottie
AAML
340
4
0
28 May 2024
Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
Zachary Coalson
Huazheng Wang
Qingyun Wu
Sanghyun Hong
AAML OOD
290
0
0
09 May 2024
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri
Arpit Bansal
Hamid Kazemi
Liam H. Fowl
Aniruddha Saha
Jonas Geiping
Andrew Gordon Wilson
Rama Chellappa
Tom Goldstein
Micah Goldblum
SILM DiffM
185
1
0
25 Mar 2024
Have You Poisoned My Data? Defending Neural Networks against Data Poisoning
Fabio De Gaspari
Dorjan Hitaj
Luigi V. Mancini
AAML TDI
171
8
0
20 Mar 2024
Threats, Attacks, and Defenses in Machine Unlearning: A Survey
Ziyao Liu
Huanyi Ye
Chen Chen
Yongsen Zheng
K. Lam
AAML MU
591
49
0
20 Mar 2024
Certified Robustness to Clean-Label Poisoning Using Diffusion Denoising
Sanghyun Hong
Nicholas Carlini
Alexey Kurakin
DiffM
282
3
0
18 Mar 2024
Teach LLMs to Phish: Stealing Private Information from Language Models
Ashwinee Panda
Christopher A. Choquette-Choo
Zhengming Zhang
Yaoqing Yang
Prateek Mittal
PILM
282
36
0
01 Mar 2024
Auditing Private Prediction
Karan Chadha
Matthew Jagielski
Nicolas Papernot
Christopher A. Choquette-Choo
Milad Nasr
280
9
0
14 Feb 2024
Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models
Neural Information Processing Systems (NeurIPS), 2024
Yuancheng Xu
Jiarui Yao
Manli Shu
Yanchao Sun
Zichu Wu
Ning Yu
Tom Goldstein
Furong Huang
AAML
305
37
0
05 Feb 2024
Data Poisoning for In-context Learning
Pengfei He
Han Xu
Yue Xing
Hui Liu
Makoto Yamada
Shucheng Zhou
SILM AAML
384
23
0
03 Feb 2024
Preference Poisoning Attacks on Reward Model Learning
Junlin Wu
Zhenghao Hu
Chaowei Xiao
Chenguang Wang
Ning Zhang
Yevgeniy Vorobeychik
AAML
275
11
0
02 Feb 2024
Manipulating Predictions over Discrete Inputs in Machine Teaching
Xiaodong Wu
Yufei Han
H. Dahrouj
Jianbing Ni
Zhenwen Liang
Xiangliang Zhang
205
0
0
31 Jan 2024