Data Poisoning against Differentially-Private Learners: Attacks and Defenses (arXiv:1903.09860)
Yuzhe Ma, Xiaojin Zhu, Justin Hsu
23 March 2019
[SILM]
Papers citing "Data Poisoning against Differentially-Private Learners: Attacks and Defenses" (25 of 25 papers shown)
1. Crowding Out The Noise: Algorithmic Collective Action Under Differential Privacy. Rushabh Solanki, Meghana Bhange, Ulrich Aïvodji, Elliot Creager. 09 May 2025.
2. Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging. Rui Luo, Jie Bao, Zhixin Zhou, Chuangyin Dang. 07 Nov 2024. [MedIm, AAML]
3. PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models. Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie. 28 May 2024. [AAML]
4. Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance. Wenqi Wei, Ling Liu. 02 Feb 2024.
5. A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective. Xianghua Xie, Chen Hu, Hanchi Ren, Jingjing Deng. 27 Nov 2023. [FedML, AAML]
6. PrivImage: Differentially Private Synthetic Image Generation using Diffusion Models with Semantic-Aware Pretraining. Kecen Li, Chen Gong, Zhixiang Li, Yuzhong Zhao, Xinwen Hou, Tianhao Wang. 19 Oct 2023.
7. Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks. Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein. 15 Aug 2023. [AAML]
8. Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks. Xiaoyun Xu, Oguzhan Ersoy, S. Picek. 01 Feb 2023. [AAML]
9. Backdoor Cleansing with Unlabeled Data. Lu Pang, Tao Sun, Haibin Ling, Chao Chen. 22 Nov 2022. [AAML]
10. A General Framework for Auditing Differentially Private Machine Learning. Fred Lu, Joseph Munoz, Maya Fuchs, Tyler LeBlond, Elliott Zaresky-Williams, Edward Raff, Francis Ferraro, Brian Testa. 16 Oct 2022. [FedML]
11. On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach. M. Anisetti, C. Ardagna, Alessandro Balestrucci, Nicola Bena, Ernesto Damiani, C. Yeun. 28 Sep 2022. [AAML, OOD]
12. Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks. Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li. 08 Sep 2022. [FedML]
13. SNAP: Efficient Extraction of Private Properties with Poisoning. Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman. 25 Aug 2022. [MIACV]
14. Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms. Ehsan Hallaji, R. Razavi-Far, M. Saif. 05 Jul 2022. [AAML, FedML]
15. Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation. Xiaoguang Li, Ninghui Li, Wenhai Sun, Neil Zhenqiang Gong, Hui Li. 24 May 2022. [AAML]
16. PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong. 13 May 2022.
17. ScaleSFL: A Sharding Solution for Blockchain-Based Federated Learning. Evan W. R. Madill, Ben Nguyen, C. Leung, Sara Rouhani. 04 Apr 2022.
18. Combining Differential Privacy and Byzantine Resilience in Distributed SGD. R. Guerraoui, Nirupam Gupta, Rafael Pinot, Sébastien Rouault, John Stephan. 08 Oct 2021. [FedML]
19. SoK: Machine Learning Governance. Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot. 20 Sep 2021.
20. Privacy-Preserving Machine Learning: Methods, Challenges and Directions. Runhua Xu, Nathalie Baracaldo, J. Joshi. 10 Aug 2021.
21. A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers. Xi Li, David J. Miller, Zhen Xiang, G. Kesidis. 28 May 2021. [AAML]
22. Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release. Liam H. Fowl, Ping Yeh-Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, W. Czaja, Tom Goldstein. 16 Feb 2021.
23. Data Poisoning Attacks to Deep Learning Based Recommender Systems. Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu. 07 Jan 2021. [AAML]
24. Mitigating Sybil Attacks on Differential Privacy based Federated Learning. Yupeng Jiang, Yong Li, Yipeng Zhou, Xi Zheng. 20 Oct 2020. [FedML, AAML]
25. Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching. Jonas Geiping, Liam H. Fowl, W. R. Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein. 04 Sep 2020. [AAML]