Differential Privacy Has Disparate Impact on Model Accuracy

28 May 2019 · arXiv:1905.12101
Eugene Bagdasaryan, Vitaly Shmatikov

Papers citing "Differential Privacy Has Disparate Impact on Model Accuracy"

Showing 50 of 109 citing papers.
  • Modelling the long-term fairness dynamics of data-driven targeted help on job seekers
    S. Scher, Simone Kopeinik, A. Trugler, Dominik Kowald
    17 Aug 2022
  • Differentially Private Counterfactuals via Functional Mechanism
    Fan Yang, Qizhang Feng, Kaixiong Zhou, Jiahao Chen, Xia Hu
    04 Aug 2022
  • FLAIR: Federated Learning Annotated Image Repository
    Congzheng Song, Filip Granqvist, Kunal Talwar
    18 Jul 2022 · FedML
  • Hercules: Boosting the Performance of Privacy-preserving Federated Learning
    Guowen Xu, Xingshuo Han, Shengmin Xu, Tianwei Zhang, Hongwei Li, Xinyi Huang, R. Deng
    11 Jul 2022 · FedML
  • Pile of Law: Learning Responsible Data Filtering from the Law and a 256GB Open-Source Legal Dataset
    Peter Henderson, M. Krass, Lucia Zheng, Neel Guha, Christopher D. Manning, Dan Jurafsky, Daniel E. Ho
    01 Jul 2022 · AILaw, ELM
  • The Privacy Onion Effect: Memorization is Relative
    Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr
    21 Jun 2022 · PILM, MIACV
  • Disparate Impact in Differential Privacy from Gradient Misalignment
    Maria S. Esipova, Atiyeh Ashari Ghomi, Yaqiao Luo, Jesse C. Cresswell
    15 Jun 2022
  • Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
    Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser
    08 Jun 2022
  • Differentially Private Shapley Values for Data Evaluation
    Lauren Watson, R. Andreeva, Hao Yang, Rik Sarkar
    01 Jun 2022 · TDI, FAtt, FedML
  • Pruning has a disparate impact on model accuracy
    Cuong Tran, Ferdinando Fioretto, Jung-Eun Kim, Rakshit Naidu
    26 May 2022
  • On the Importance of Architecture and Feature Selection in Differentially Private Machine Learning
    Wenxuan Bao, L. A. Bauer, Vincent Bindschaedler
    13 May 2022 · OOD
  • Privacy Enhancement for Cloud-Based Few-Shot Learning
    Archit Parnami, Muhammad Usama, Liyue Fan, Minwoo Lee
    10 May 2022
  • Decentralized Stochastic Optimization with Inherent Privacy Protection
    Yongqiang Wang, H. Vincent Poor
    08 May 2022
  • HBFL: A Hierarchical Blockchain-based Federated Learning Framework for a Collaborative IoT Intrusion Detection
    Mohanad Sarhan, Wai Weng Lo, S. Layeghy, Marius Portmann
    08 Apr 2022
  • Bounding Membership Inference
    Anvith Thudi, Ilia Shumailov, Franziska Boenisch, Nicolas Papernot
    24 Feb 2022
  • Exploring the Unfairness of DP-SGD Across Settings
    Frederik Noe, R. Herskind, Anders Søgaard
    24 Feb 2022
  • Differentially Private Speaker Anonymization
    Ali Shahin Shamsabadi, B. M. L. Srivastava, A. Bellet, Nathalie Vauquier, Emmanuel Vincent, Mohamed Maouche, Marc Tommasi, Nicolas Papernot
    23 Feb 2022 · MIACV
  • Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey
    Ferdinando Fioretto, Cuong Tran, Pascal Van Hentenryck, Keyu Zhu
    16 Feb 2022 · FaML
  • Datamodels: Predicting Predictions from Training Data
    Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, A. Madry
    01 Feb 2022 · TDI
  • Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
    Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein
    01 Feb 2022 · FedML
  • Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges
    Nuria Rodríguez-Barroso, Daniel Jiménez López, M. V. Luzón, Francisco Herrera, Eugenio Martínez-Cámara
    20 Jan 2022 · FedML
  • Equity and Privacy: More Than Just a Tradeoff
    David Pujol, Ashwin Machanavajjhala
    08 Nov 2021
  • Fairness-Driven Private Collaborative Machine Learning
    Dana Pessach, Tamir Tassa, E. Shmueli
    29 Sep 2021 · FedML
  • NanoBatch Privacy: Enabling fast Differentially Private learning on the IPU
    Edward H. Lee, M. M. Krell, Alexander Tsyplikhin, Victoria Rege, E. Colak, Kristen W. Yeom
    24 Sep 2021 · FedML
  • Robin Hood and Matthew Effects: Differential Privacy Has Disparate Impact on Synthetic Data
    Georgi Ganev, Bristena Oprisanu, Emiliano De Cristofaro
    23 Sep 2021
  • Partial sensitivity analysis in differential privacy
    Tamara T. Mueller, Alexander Ziller, Dmitrii Usynin, Moritz Knolle, F. Jungmann, Daniel Rueckert, Georgios Kaissis
    22 Sep 2021
  • SoK: Machine Learning Governance
    Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
    20 Sep 2021
  • A Fairness Analysis on Private Aggregation of Teacher Ensembles
    Cuong Tran, M. H. Dinh, Kyle Beiter, Ferdinando Fioretto
    17 Sep 2021
  • Enforcing fairness in private federated learning via the modified method of differential multipliers
    Borja Rodríguez Gálvez, Filip Granqvist, Rogier van Dalen, M. Seigel
    17 Sep 2021 · FedML
  • Federated Learning Meets Fairness and Differential Privacy
    P. Manisha, Sankarshan Damle, Sujit Gujar
    23 Aug 2021 · FedML
  • Privacy-Preserving Machine Learning: Methods, Challenges and Directions
    Runhua Xu, Nathalie Baracaldo, J. Joshi
    10 Aug 2021
  • A Field Guide to Federated Optimization
    Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
    14 Jul 2021 · FedML
  • Smoothed Differential Privacy
    Ao Liu, Yu-Xiang Wang, Lirong Xia
    04 Jul 2021
  • Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics
    Paula Czarnowska, Yogarshi Vyas, Kashif Shah
    28 Jun 2021
  • Membership Inference on Word Embedding and Beyond
    Saeed Mahloujifar, Huseyin A. Inan, Melissa Chase, Esha Ghosh, Marcello Hasegawa
    21 Jun 2021 · MIACV, SILM
  • Accuracy, Interpretability, and Differential Privacy via Explainable Boosting
    Harsha Nori, R. Caruana, Zhiqi Bu, J. Shen, Janardhan Kulkarni
    17 Jun 2021
  • Optimality and Stability in Federated Learning: A Game-theoretic Approach
    Kate Donahue, Jon M. Kleinberg
    17 Jun 2021 · FedML
  • A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
    Xi Li, David J. Miller, Zhen Xiang, G. Kesidis
    28 May 2021 · AAML
  • Model Selection's Disparate Impact in Real-World Deep Learning Applications
    Jessica Zosa Forde, A. Feder Cooper, Kweku Kwegyir-Aggrey, Chris De Sa, Michael Littman
    01 Apr 2021
  • Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
    Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim
    12 Mar 2021
  • Understanding and Mitigating Accuracy Disparity in Regression
    Jianfeng Chi, Yuan Tian, Geoffrey J. Gordon, Han Zhao
    24 Feb 2021
  • Exacerbating Algorithmic Bias through Fairness Attacks
    Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan
    16 Dec 2020 · AAML
  • Robustness Threats of Differential Privacy
    Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
    14 Dec 2020 · AAML
  • On the Privacy Risks of Algorithmic Fairness
    Hong Chang, Reza Shokri
    07 Nov 2020 · FaML
  • Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy
    Tom Farrand, Fatemehsadat Mireshghallah, Sahib Singh, Andrew Trask
    10 Sep 2020 · FedML
  • What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
    Vitaly Feldman, Chiyuan Zhang
    09 Aug 2020 · TDI
  • Anonymizing Machine Learning Models
    Abigail Goldsteen, Gilad Ezov, Ron Shmelkin, Micha Moffie, Ariel Farkash
    26 Jul 2020 · MIACV
  • Reducing Risk of Model Inversion Using Privacy-Guided Training
    Abigail Goldsteen, Gilad Ezov, Ariel Farkash
    29 Jun 2020
  • Model Explanations with Differential Privacy
    Neel Patel, Reza Shokri, Yair Zick
    16 Jun 2020 · SILM, FedML
  • Balance is key: Private median splits yield high-utility random trees
    Shorya Consul, Sinead Williamson
    15 Jun 2020