fairlib: A Unified Framework for Assessing and Improving Classification Fairness

arXiv:2205.01876 · 4 May 2022
Authors: Xudong Han, Aili Shen, Yitong Li, Lea Frermann, Timothy Baldwin, Trevor Cohn
Topics: VLM, FaML
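The paper indexed on this page describes a toolkit for assessing and improving classification fairness, i.e. measuring and reducing performance disparities across protected groups. As a rough illustration of the kind of measurement involved (this is a generic sketch, not fairlib's actual API; the function and variable names below are assumptions made for this example), a minimal group-fairness computation might look like:

```python
# Generic sketch (not fairlib's API): group-wise fairness metrics for a binary
# classifier, of the kind benchmarked by the papers listed on this page.
# All function and variable names are illustrative assumptions.
import numpy as np

def group_fairness_gaps(y_true, y_pred, groups):
    """Return demographic-parity and equal-opportunity (TPR) gaps across groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    pos_rates, tprs = [], []
    for g in np.unique(groups):
        mask = groups == g
        pos_rates.append(y_pred[mask].mean())   # P(y_hat = 1 | group = g)
        pos = mask & (y_true == 1)
        if pos.any():
            tprs.append(y_pred[pos].mean())     # P(y_hat = 1 | y = 1, group = g)
    return {
        "demographic_parity_gap": max(pos_rates) - min(pos_rates),
        "equal_opportunity_gap": max(tprs) - min(tprs),
    }

# Toy usage: two protected groups and a predictor that favours group 0.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(group_fairness_gaps(y_true, y_pred, groups))
```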

Papers citing "fairlib: A Unified Framework for Assessing and Improving Classification Fairness" (11 papers shown)
 1. On Fairness and Stability: Is Estimator Variance a Friend or a Foe?
    Falaah Arif Khan, Denys Herasymuk, Julia Stoyanovich (09 Feb 2023)
 2. Erasure of Unaligned Attributes from Neural Representations
    Shun Shao, Yftah Ziser, Shay B. Cohen (06 Feb 2023)
 3. Systematic Evaluation of Predictive Fairness
    Xudong Han, Aili Shen, Trevor Cohn, Timothy Baldwin, Lea Frermann (17 Oct 2022)
 4. MEDFAIR: Benchmarking Fairness for Medical Imaging
    Yongshuo Zong, Yongxin Yang, Timothy M. Hospedales (04 Oct 2022) · Topics: OOD
 5. Optimising Equal Opportunity Fairness in Model Training
    Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann (05 May 2022) · Topics: FaML
 6. Gold Doesn't Always Glitter: Spectral Removal of Linear and Nonlinear Guarded Attribute Information
    Shun Shao, Yftah Ziser, Shay B. Cohen (15 Mar 2022) · Topics: AAML
 7. Towards Equal Opportunity Fairness through Adversarial Learning
    Xudong Han, Timothy Baldwin, Trevor Cohn (12 Mar 2022) · Topics: FaML
 8. Towards Identifying Social Bias in Dialog Systems: Frame, Datasets, and Benchmarks
    Jingyan Zhou, Deng Jiawen, Fei Mi, Yitong Li, Yasheng Wang, Minlie Huang, Xin Jiang, Qun Liu, Helen Meng (16 Feb 2022)
 9. Contrastive Learning for Fair Representations
    Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, Lea Frermann (22 Sep 2021) · Topics: FaML
10. Balancing out Bias: Achieving Fairness Through Balanced Training
    Xudong Han, Timothy Baldwin, Trevor Cohn (16 Sep 2021)
11. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
    Alexandra Chouldechova (24 Oct 2016) · Topics: FaML