On the Independence of Association Bias and Empirical Fairness in Language Models

20 April 2023
Laura Cabello, Anna Katrine van Zee, Anders Søgaard

Papers citing "On the Independence of Association Bias and Empirical Fairness in Language Models"

22 / 22 papers shown
BiasGuard: A Reasoning-enhanced Bias Detection Tool For Large Language Models
Zhiting Fan, Ruizhe Chen, Zuozhu Liu (30 Apr 2025)

Fair Text Classification via Transferable Representations
Thibaud Leteno, Michael Perrot, Charlotte Laclau, Antoine Gourru, Christophe Gravier (10 Mar 2025) [FaML]

How far can bias go? -- Tracing bias from pretraining data to alignment
Marion Thaler, Abdullatif Köksal, Alina Leidinger, Anna Korhonen, Hinrich Schütze (28 Nov 2024)

FairMT-Bench: Benchmarking Fairness for Multi-turn Dialogue in Conversational LLMs
Zhiting Fan, Ruizhe Chen, Tianxiang Hu, Zuozhu Liu (25 Oct 2024)

Ethics Whitepaper: Whitepaper on Ethical Research into Large Language Models
Eddie L. Ungless, Nikolas Vitsakis, Zeerak Talat, James Garforth, Björn Ross, Arno Onken, Atoosa Kasirzadeh, Alexandra Birch (17 Oct 2024)

Investigating Implicit Bias in Large Language Models: A Large-Scale Study of Over 50 LLMs
Divyanshu Kumar, Umang Jain, Sahil Agarwal, P. Harshangi (13 Oct 2024)

BiasAlert: A Plug-and-play Tool for Social Bias Detection in LLMs
Zhiting Fan, Ruizhe Chen, Ruiling Xu, Zuozhu Liu (14 Jul 2024) [KELM]

Why Don't Prompt-Based Fairness Metrics Correlate?
A. Zayed, Gonçalo Mordido, Ioana Baldini, Sarath Chandar (09 Jun 2024) [ALM]

Exploring Subjectivity for more Human-Centric Assessment of Social Biases in Large Language Models
Paula Akemi Aoyagui, Sharon Ferguson, Anastasia Kuzminykh (17 May 2024)

Fairness in Large Language Models: A Taxonomic Survey
Zhibo Chu, Zichong Wang, Wenbin Zhang (31 Mar 2024) [AILaw]

Addressing Both Statistical and Causal Gender Fairness in NLP Models
Hannah Chen, Yangfeng Ji, David E. Evans (30 Mar 2024)

Detecting Bias in Large Language Models: Fine-tuned KcBERT
J. K. Lee, T. M. Chung (16 Mar 2024)

Fairness Certification for Natural Language Processing and Large Language Models
Vincent Freiberger, Erik Buchmann (02 Jan 2024)

A Group Fairness Lens for Large Language Models
Guanqun Bi, Lei Shen, Yuqiang Xie, Yanan Cao, Tiangang Zhu, Xiao-feng He (24 Dec 2023) [ALM]

Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth (03 Dec 2023) [ALM]

Fair Text Classification with Wasserstein Independence
Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Rémi Emonet, Christophe Gravier (21 Nov 2023) [FaML]

Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models
Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott (26 Oct 2023)

On the Interplay between Fairness and Explainability
Stephanie Brandl, Emanuele Bugliarello, Ilias Chalkidis (25 Oct 2023) [FaML]

Factual and Personalized Recommendations using Language Models and Reinforcement Learning
Jihwan Jeong, Yinlam Chow, Guy Tennenholtz, Chih-Wei Hsu, Azamat Tulepbergenov, Mohammad Ghavamzadeh, Craig Boutilier (09 Oct 2023)

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed (02 Sep 2023) [AILaw]

Log-linear Guardedness and its Implications
Shauli Ravfogel, Yoav Goldberg, Ryan Cotterell (18 Oct 2022)

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan (23 Aug 2019) [SyDa, FaML]