Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes
Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hanguang Li, Y. Liu
arXiv:2210.03175 · 6 October 2022
Papers citing "Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes" (13 of 13 papers shown)
Constructing Fair Latent Space for Intersection of Fairness and Explainability
Hyungjun Joo, Hyeonggeun Han, Sehwan Kim, Sangwoo Hong, Jungwoo Lee · 23 Dec 2024

Alpha and Prejudice: Improving α-sized Worst-case Fairness via Intrinsic Reweighting
Jing Li, Yinghua Yao, Yuangang Pan, Xuanqian Wang, Ivor Tsang, Xiuju Fu · FaML · 05 Nov 2024

Fairness Risks for Group-conditionally Missing Demographics
Kaiqi Jiang, Wenzhe Fan, Mao Li, Xinhua Zhang · 20 Feb 2024

Distributionally Robust Post-hoc Classifiers under Prior Shifts
Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wenjun Chu, Yang Liu, Abhishek Kumar · OOD · 16 Sep 2023

Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access
A. Veldanda, Ivan Brugere, Sanghamitra Dutta, Alan Mishler, S. Garg · 02 Feb 2023

Mitigating Neural Network Overconfidence with Logit Normalization
Hongxin Wei, Renchunzi Xie, Hao-Ran Cheng, Lei Feng, Bo An, Yixuan Li · OODD · 19 May 2022

Detecting Corrupted Labels Without Training a Model to Predict
Zhaowei Zhu, Zihao Dong, Yang Liu · NoLa · 12 Oct 2021

The Rich Get Richer: Disparate Impact of Semi-Supervised Learning
Zhaowei Zhu, Tianyi Luo, Yang Liu · 12 Oct 2021

Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information
Pranjal Awasthi, Alex Beutel, Matthaeus Kleindessner, Jamie Morgenstern, Xuezhi Wang · FaML · 16 Feb 2021

Combating noisy labels by agreement: A joint training method with co-regularization
Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An · NoLa · 05 Mar 2020

Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach · FaML, HAI · 13 Dec 2018

Learning Adversarially Fair and Transferable Representations
David Madras, Elliot Creager, T. Pitassi, R. Zemel · FaML · 17 Feb 2018

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova · FaML · 24 Oct 2016