Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes
arXiv:2210.03175 · 6 October 2022
Zhaowei Zhu, Yuanshun Yao, Jiankai Sun, Hanguang Li, Y. Liu

Papers citing "Weak Proxies are Sufficient and Preferable for Fairness with Missing Sensitive Attributes" (13 papers):
  • Constructing Fair Latent Space for Intersection of Fairness and Explainability. Hyungjun Joo, Hyeonggeun Han, Sehwan Kim, Sangwoo Hong, Jungwoo Lee. 23 Dec 2024.
  • Alpha and Prejudice: Improving α-sized Worst-case Fairness via Intrinsic Reweighting. Jing Li, Yinghua Yao, Yuangang Pan, Xuanqian Wang, Ivor Tsang, Xiuju Fu. 05 Nov 2024. [FaML]
  • Fairness Risks for Group-conditionally Missing Demographics. Kaiqi Jiang, Wenzhe Fan, Mao Li, Xinhua Zhang. 20 Feb 2024.
  • Distributionally Robust Post-hoc Classifiers under Prior Shifts. Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wenjun Chu, Yang Liu, Abhishek Kumar. 16 Sep 2023. [OOD]
  • Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access. A. Veldanda, Ivan Brugere, Sanghamitra Dutta, Alan Mishler, S. Garg. 02 Feb 2023.
  • Mitigating Neural Network Overconfidence with Logit Normalization. Hongxin Wei, Renchunzi Xie, Hao-Ran Cheng, Lei Feng, Bo An, Yixuan Li. 19 May 2022. [OODD]
  • Detecting Corrupted Labels Without Training a Model to Predict. Zhaowei Zhu, Zihao Dong, Yang Liu. 12 Oct 2021. [NoLa]
  • The Rich Get Richer: Disparate Impact of Semi-Supervised Learning. Zhaowei Zhu, Tianyi Luo, Yang Liu. 12 Oct 2021.
  • Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information. Pranjal Awasthi, Alex Beutel, Matthaeus Kleindessner, Jamie Morgenstern, Xuezhi Wang. 16 Feb 2021. [FaML]
  • Combating noisy labels by agreement: A joint training method with co-regularization. Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An. 05 Mar 2020. [NoLa]
  • Improving fairness in machine learning systems: What do industry practitioners need? Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach. 13 Dec 2018. [FaML, HAI]
  • Learning Adversarially Fair and Transferable Representations. David Madras, Elliot Creager, T. Pitassi, R. Zemel. 17 Feb 2018. [FaML]
  • Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova. 24 Oct 2016. [FaML]