DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
arXiv:2103.02079 · 2 March 2021
Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam H. Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
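
For context on the defense these citing papers respond to: DP-InstaHide trains on k-way mixtures of training images with additive Laplacian noise, and the paper shows that this augmentation satisfies a differential-privacy guarantee, which in turn bounds how much any poisoned or backdoored training example can influence the model. A minimal NumPy sketch of one such augmentation step is below; the function name, the Dirichlet draw for the mixing coefficients, and the noise_scale default are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def dp_instahide_batch(images, labels, k=4, noise_scale=0.1, rng=None):
    """One DP-InstaHide-style augmentation step: k-way mixup + Laplacian noise.

    images: float array (n, H, W, C) with pixels in [0, 1]
    labels: one-hot float array (n, num_classes)
    """
    rng = rng or np.random.default_rng()
    n = len(images)
    # Draw k mixing coefficients per output sample that sum to 1
    # (Dirichlet sampling is an assumption; the paper's scheme may differ).
    coeffs = rng.dirichlet(np.ones(k), size=n)        # (n, k)
    partners = rng.integers(0, n, size=(n, k))        # which images to mix
    mixed_x = np.einsum('nk,nk...->n...', coeffs, images[partners])
    mixed_y = np.einsum('nk,nkc->nc', coeffs, labels[partners])
    # The additive Laplacian noise is what carries the DP guarantee.
    mixed_x += rng.laplace(scale=noise_scale, size=mixed_x.shape)
    return np.clip(mixed_x, 0.0, 1.0), mixed_y
```

Training then proceeds on the mixed images and soft labels; a larger noise_scale tightens the privacy bound at some cost in accuracy.
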
Papers citing "DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations" (33 papers)

FML-bench: Benchmarking Machine Learning Agents for Scientific Research
Qiran Zou, Hou Hei Lam, Wenhao Zhao, Yiming Tang, Tingting Chen, S. Yu, Tianyi Zhang, Chang Liu, X. Ji, Dianbo Liu
12 Oct 2025

Mitigating Unauthorized Speech Synthesis for Voice Protection
Zhisheng Zhang, Qianyi Yang, Derui Wang, Pengyang Huang, Yuxin Cao, Kai Ye, Jie Hao
28 Oct 2024

Data Taggants: Dataset Ownership Verification via Harmless Targeted Data Poisoning
International Conference on Learning Representations (ICLR), 2024
Wassim Bouaziz, El-Mahdi El-Mhamdi, Nicolas Usunier
09 Oct 2024

Balancing Label Imbalance in Federated Environments Using Only Mixup and Artificially-Labeled Noise
International Conference on Pattern Recognition and Artificial Intelligence (ICCPRAI), 2024
Kyle Rui Sang, Tahseen Rabbani, Furong Huang
20 Sep 2024

Protecting against simultaneous data poisoning attacks
International Conference on Learning Representations (ICLR), 2024
Neel Alex, Shoaib Ahmed Siddiqui, Amartya Sanyal, David M. Krueger
23 Aug 2024

Privacy-Preserving Split Learning with Vision Transformers using Patch-Wise Random and Noisy CutMix
Yang Jin, Sihun Baek, Lei Zhang, Hyelin Nam, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim
02 Aug 2024

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam H. Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum
25 Mar 2024

Certified Robustness to Clean-Label Poisoning Using Diffusion Denoising
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
18 Mar 2024

Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency
Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu
15 Mar 2024

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models
International Conference on Learning Representations (ICLR), 2024
Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, Bo Li
20 Jan 2024

Does Differential Privacy Prevent Backdoor Attacks in Practice?
Database Security (DBSec), 2023
Fereshteh Razmi, Jian Lou, Li Xiong
10 Nov 2023

DP-Mix: Mixup-based Data Augmentation for Differentially Private Learning
Neural Information Processing Systems (NeurIPS), 2023
Wenxuan Bao, Francesco Pittaluga, Vijay Kumar, Vincent Bindschaedler
02 Nov 2023

CBD: A Certified Backdoor Detector Based on Local Dominant Probability
Neural Information Processing Systems (NeurIPS), 2023
Zhen Xiang, Zidi Xiong, Bo Li
26 Oct 2023

HINT: Healthy Influential-Noise based Training to Defend against Data Poisoning Attacks
Industrial Conference on Data Mining (IDM), 2023
Minh-Hao Van, Alycia N. Carey, Xintao Wu
15 Sep 2023

Differentially-Private Decision Trees and Provable Robustness to Data Poisoning
D. Vos, Jelle Vos, Tianyu Li, Z. Erkin, S. Verwer
24 May 2023

Pick your Poison: Undetectability versus Robustness in Data Poisoning Attacks
Nils Lukas, Florian Kerschbaum
07 May 2023

Does Federated Learning Really Need Backpropagation?
European Conference on Computer Vision (ECCV), 2023
Hao Feng, Tianyu Pang, Chao Du, Wei Chen, Shuicheng Yan, Min Lin
28 Jan 2023

Silent Killer: A Stealthy, Clean-Label, Black-Box Backdoor Attack
Tzvi Lederer, Gallil Maimon, Lior Rokach
05 Jan 2023

Differentially Private CutMix for Split Learning with Vision Transformer
Seungeun Oh, Jihong Park, Sihun Baek, Hyelin Nam, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim
28 Oct 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Neural Information Processing Systems (NeurIPS), 2022
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
14 Aug 2022

Privacy Safe Representation Learning via Frequency Filtering Encoder
J. Jeong, Minyong Cho, Philipp Benz, Jinwoo Hwang, J. Kim, Seungkwang Lee, Tae-Hoon Kim
04 Aug 2022

Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation
USENIX Security Symposium (USENIX Security), 2022
Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, Hui Li
24 May 2022

Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
ACM Computing Surveys (ACM CSUR), 2022
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Sebastiano Vascon, Werner Zellinger, Bernhard A. Moser, Alina Oprea, Battista Biggio, Marcello Pelillo, Fabio Roli
04 May 2022

Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
International Conference on Learning Representations (ICLR), 2022
Hao He, Kaiwen Zha, Dina Katabi
22 Feb 2022

On the Effectiveness of Adversarial Training against Backdoor Attacks
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shutao Xia, Gang Niu, Masashi Sugiyama
22 Feb 2022

Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches
Reena Zelenkova, J. Swallow, Pathum Chamikara Mahawaga Arachchige, Dongxi Liu, Mohan Baruwal Chhetri, S. Çamtepe, M. Grobler, Mahathir Almashor
18 Feb 2022

Adversarial Examples Make Strong Poisons
Neural Information Processing Systems (NeurIPS), 2021
Liam H. Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
21 Jun 2021

Accumulative Poisoning Attacks on Real-time Data
Neural Information Processing Systems (NeurIPS), 2021
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
18 Jun 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
16 Jun 2021

Survey: Image Mixing and Deleting for Data Augmentation
Engineering Applications of Artificial Intelligence (EAAI), 2021
Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian
13 Jun 2021

AirMixML: Over-the-Air Data Mixup for Inherently Privacy-Preserving Edge Machine Learning
Global Communications Conference (GLOBECOM), 2021
Yusuke Koda, Jihong Park, Mehdi Bennis, Praneeth Vepakomma, Ramesh Raskar
02 May 2021

What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning
Jonas Geiping, Liam H. Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein
26 Feb 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein
18 Dec 2020