Not All Poisons are Created Equal: Robust Training against Data Poisoning
Yu Yang, Tianwei Liu, Baharan Mirzasoleiman
arXiv:2210.09671 (AAML), 18 October 2022

Papers citing "Not All Poisons are Created Equal: Robust Training against Data Poisoning" (26 papers):
- Like Oil and Water: Group Robustness Methods and Poisoning Defenses May Be at Odds. Michael-Andrei Panaitescu-Liess, Yigitcan Kaya, Sicheng Zhu, Furong Huang, Tudor Dumitras. AAML. 02 Apr 2025.
- PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models. Michael-Andrei Panaitescu-Liess, Pankayaraj Pathmanathan, Yigitcan Kaya, Zora Che, Bang An, Sicheng Zhu, Aakriti Agrawal, Furong Huang. AAML. 10 Mar 2025.
- Mitigating Unauthorized Speech Synthesis for Voice Protection. Zhisheng Zhang, Qianyi Yang, Derui Wang, Pengyang Huang, Yuxin Cao, Kai Ye, Jie Hao. AAML. 28 Oct 2024.
- AdvBDGen: Adversarially Fortified Prompt-Specific Fuzzy Backdoor Generator Against LLM Alignment. Pankayaraj Pathmanathan, Udari Madhushani Sehwag, Michael-Andrei Panaitescu-Liess, Furong Huang. SILM, AAML. 15 Oct 2024.
- Clean Label Attacks against SLU Systems. Lin Zhang, Sonal Joshi, Thomas Thebaud, Jesus Villalba, Najim Dehak, Sanjeev Khudanpur. AAML. 13 Sep 2024.
- Vera Verto: Multimodal Hijacking Attack. Minxing Zhang, Wenhao Yang, H. Bidkhori, Yang Zhang. AAML. 31 Jul 2024.
- PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models. Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie. AAML. 28 May 2024.
- PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics. Sunay Bhat, Jeffrey Q. Jiang, Omead Brandon Pooladzandi, Alexander Branch, Gregory Pottie. AAML. 28 May 2024.
- FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models. Yanting Wang, Wei Zou, Jinyuan Jia. 12 Apr 2024.
- Have You Poisoned My Data? Defending Neural Networks against Data Poisoning. Fabio De Gaspari, Dorjan Hitaj, Luigi V. Mancini. AAML, TDI. 20 Mar 2024.
- Robust Influence-based Training Methods for Noisy Brain MRI. Minh-Hao Van, Alycia N. Carey, Xintao Wu. OOD, NoLa. 15 Mar 2024.
- SmallToLarge (S2L): Scalable Data Selection for Fine-tuning Large Language Models by Summarizing Training Trajectories of Small Models. Yu Yang, Siddhartha Mishra, Jeffrey N Chiang, Baharan Mirzasoleiman. 12 Mar 2024.
- Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models. Yuancheng Xu, Jiarui Yao, Manli Shu, Yanchao Sun, Zichu Wu, Ning Yu, Tom Goldstein, Furong Huang. AAML. 05 Feb 2024.
- A Comprehensive Survey of Attack Techniques, Implementation, and Mitigation Strategies in Large Language Models. Aysan Esmradi, Daniel Wankit Yip, C. Chan. AAML. 18 Dec 2023.
- Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey. Yichen Wan, Youyang Qu, Wei Ni, Yong Xiang, Longxiang Gao, Ekram Hossain. AAML. 14 Dec 2023.
- BELT: Old-School Backdoor Attacks can Evade the State-of-the-Art Defense with Backdoor Exclusivity Lifting. Huming Qiu, Junjie Sun, Mi Zhang, Xudong Pan, Min Yang. AAML. 08 Dec 2023.
- Mendata: A Framework to Purify Manipulated Training Data. Zonghao Huang, Neil Zhenqiang Gong, Michael K. Reiter. 03 Dec 2023.
- Effective Backdoor Mitigation in Vision-Language Models Depends on the Pre-training Objective. Sahil Verma, Gantavya Bhatt, Avi Schwarzschild, Soumye Singhal, Arnav M. Das, Chirag Shah, John P Dickerson, Jeff Bilmes. AAML. 25 Nov 2023.
- HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks. Minh-Hao Van, Alycia N. Carey, Xintao Wu. TDI, AAML. 15 Sep 2023.
- Rethinking Backdoor Attacks. Alaa Khaddaj, Guillaume Leclerc, Aleksandar Makelov, Kristian Georgiev, Hadi Salman, Andrew Ilyas, Aleksander Madry. SILM. 19 Jul 2023.
- A Proxy Attack-Free Strategy for Practically Improving the Poisoning Efficiency in Backdoor Attacks. Ziqiang Li, Hong Sun, Pengfei Xia, Beihao Xia, Xue Rui, Wei Zhang, Qinglang Guo, Bin Li. AAML. 14 Jun 2023.
- CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning. Hritik Bansal, Nishad Singhi, Yu Yang, Fan Yin, Aditya Grover, Kai-Wei Chang. AAML. 06 Mar 2023.
- Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks. Jimmy Z. Di, Jack Douglas, Jayadev Acharya, Gautam Kamath, Ayush Sekhari. MU. 21 Dec 2022.
- Differentially Private Optimizers Can Learn Adversarially Robust Models. Yuan Zhang, Zhiqi Bu. 16 Nov 2022.
- Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks. Tianwei Liu, Yu Yang, Baharan Mirzasoleiman. AAML. 14 Aug 2022.
- Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli. AAML. 04 May 2022.