ResearchTrend.AI

arXiv:2107.04423
Multiaccurate Proxies for Downstream Fairness

9 July 2021
Emily Diana, Wesley Gill, Michael Kearns, K. Kenthapadi, Aaron Roth, Saeed Sharifi-Malvajerdi

Papers citing "Multiaccurate Proxies for Downstream Fairness"

18 papers shown
Fairness without Sensitive Attributes via Knowledge Sharing
Hongliang Ni, Lei Han, Tong Chen, S. Sadiq, Gianluca Demartini
27 Sep 2024
Dancing in the Shadows: Harnessing Ambiguity for Fairer Classifiers
Ainhize Barrainkua, Paula Gordaliza, Jose A. Lozano, Novi Quadrianto
27 Jun 2024
Toward the Tradeoffs between Privacy, Fairness and Utility in Federated Learning
Kangkang Sun, Xiaojin Zhang, Xi Lin, Gaolei Li, Jing Wang, Jianhua Li
30 Nov 2023
Fairness Under Demographic Scarce Regime
Patrik Joslin Kenfack, Samira Ebrahimi Kahou, Ulrich Aivodji
24 Jul 2023
Balanced Filtering via Disclosure-Controlled Proxies
Siqi Deng, Emily Diana, Michael Kearns, Aaron Roth
26 Jun 2023
Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making
Luke M. Guerdan, Amanda Coston, Zhiwei Steven Wu, Kenneth Holstein
13 Feb 2023
Comparative Learning: A Sample Complexity Theory for Two Hypothesis Classes
Lunjia Hu, Charlotte Peale
16 Nov 2022
Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes
Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hanguang Li, Y. Liu
06 Oct 2022
Multicalibrated Regression for Downstream Fairness
Ira Globus-Harris, Varun Gupta, Christopher Jung, Michael Kearns, Jamie Morgenstern, Aaron Roth
15 Sep 2022
Bias Mitigation for Machine Learning Classifiers: A Comprehensive Survey
Max Hort, Zhenpeng Chen, Jie M. Zhang, Mark Harman, Federica Sarro
14 Jul 2022
"You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning
Marc Juárez, Aleksandra Korolova
24 Jun 2022
Context matters for fairness -- a case study on the effect of spatial distribution shifts
Siamak Ghodsi, Harith Alani, Eirini Ntoutsi
23 Jun 2022
Distributionally Robust Data Join
Pranjal Awasthi, Christopher Jung, Jamie Morgenstern
11 Feb 2022
Simple and near-optimal algorithms for hidden stratification and multi-group learning
Abdoreza Asadpour, Daniel J. Hsu
22 Dec 2021
The Rich Get Richer: Disparate Impact of Semi-Supervised Learning
Zhaowei Zhu, Tianyi Luo, Yang Liu
12 Oct 2021
Lexicographically Fair Learning: Algorithms and Generalization
Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron Roth, Saeed Sharifi-Malvajerdi
16 Feb 2021
Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information
Pranjal Awasthi, Alex Beutel, Matthaeus Kleindessner, Jamie Morgenstern, Xuezhi Wang
16 Feb 2021
Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
24 Oct 2016