Balancing out Bias: Achieving Fairness Through Balanced Training
16 September 2021
Xudong Han
Timothy Baldwin
Trevor Cohn
arXiv: 2109.08253
Papers citing "Balancing out Bias: Achieving Fairness Through Balanced Training" (24 papers)
Deep Fair Learning: A Unified Framework for Fine-tuning Representations with Sufficient Networks. Enze Shi, Linglong Kong, Bei Jiang. 08 Apr 2025.
Fair Text Classification via Transferable Representations. Thibaud Leteno, Michael Perrot, Charlotte Laclau, Antoine Gourru, Christophe Gravier. 10 Mar 2025.
Understanding and Mitigating Gender Bias in LLMs via Interpretable Neuron Editing. Zeping Yu, Sophia Ananiadou. 24 Jan 2025.
An Active Learning Framework for Inclusive Generation by Large Language Models. Sabit Hassan, Anthony Sicilia, Malihe Alikhani. 17 Oct 2024.
Active Learning for Robust and Representative LLM Generation in Safety-Critical Scenarios. Sabit Hassan, Anthony Sicilia, Malihe Alikhani. 14 Oct 2024.
MABR: Multilayer Adversarial Bias Removal Without Prior Bias Knowledge. Maxwell J. Yin, Boyu Wang, Charles X. Ling. 10 Aug 2024.
The Mismeasure of Man and Models: Evaluating Allocational Harms in Large Language Models. Hannah Chen, Yangfeng Ji, David E. Evans. 02 Aug 2024.
Balancing the Scales: Reinforcement Learning for Fair Classification. Leon Eshuijs, Shihan Wang, Antske Fokkens. 15 Jul 2024.
Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas. Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, ..., Kuofeng Gao, Sihong He, Jun Zhuang, Lu Cheng, Haohan Wang. 08 Jun 2024.
Unifying Bias and Unfairness in Information Retrieval: A Survey of Challenges and Opportunities with Large Language Models. Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, Jun Xu. 17 Apr 2024.
Addressing Both Statistical and Causal Gender Fairness in NLP Models. Hannah Chen, Yangfeng Ji, David E. Evans. 30 Mar 2024.
Potential and Challenges of Model Editing for Social Debiasing. Jianhao Yan, Futing Wang, Yafu Li, Yue Zhang. 21 Feb 2024.
A Note on Bias to Complete. Jia Xu, Mona Diab. 18 Feb 2024.
A Group Fairness Lens for Large Language Models. Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiao-feng He. 24 Dec 2023.
Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies. Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth. 03 Dec 2023.
Fair Text Classification with Wasserstein Independence. Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Rémi Emonet, Christophe Gravier. 21 Nov 2023.
Boosting Fair Classifier Generalization through Adaptive Priority Reweighing. Zhihao Hu, Yiran Xu, Mengnan Du, Jindong Gu, Xinmei Tian, Fengxiang He. 15 Sep 2023.
Bias and Fairness in Large Language Models: A Survey. Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed. 02 Sep 2023.
A Survey on Fairness in Large Language Models. Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang. 20 Aug 2023.
Sociodemographic Bias in Language Models: A Survey and Forward Path. Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau. 13 Jun 2023.
Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP. Xudong Han, Timothy Baldwin, Trevor Cohn. 11 Feb 2023.
Interpreting Unfairness in Graph Neural Networks via Training Node Attribution. Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li. 25 Nov 2022.
MEDFAIR: Benchmarking Fairness for Medical Imaging. Yongshuo Zong, Yongxin Yang, Timothy M. Hospedales. 04 Oct 2022.
Evaluating Debiasing Techniques for Intersectional Biases. Shivashankar Subramanian, Xudong Han, Timothy Baldwin, Trevor Cohn, Lea Frermann. 21 Sep 2021.