GenderBench: Evaluation Suite for Gender Biases in LLMs
Matúš Pikuliak
17 May 2025
arXiv: 2505.12054
Papers citing "GenderBench: Evaluation Suite for Gender Biases in LLMs" (18 papers shown):
1. Large Language Models Still Exhibit Bias in Long Text (23 Oct 2024). Wonje Jeung, Dongjae Jeon, Ashkan Yousefpour, Jonghyun Choi.

2. LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education (17 Oct 2024). Iain Xie Weissburg, Sathvika Anand, Sharon Levy, Haewon Jeong.

3. Gender Bias in Decision-Making with Large Language Models: A Study of Relationship Conflicts (14 Oct 2024). Sharon Levy, William D. Adler, T. Karver, Mark Dredze, Michelle R. Kaufman.

4. Evaluating Gender Bias of LLMs in Making Morality Judgements (13 Oct 2024). Divij Bajaj, Yuanyuan Lei, Jonathan Tong, Ruihong Huang.

5. Gemma 2: Improving Open Language Models at a Practical Size (31 Jul 2024). Gemma Team: Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, ..., Noah Fiedel, Armand Joulin, Kathleen Kenealy, Robert Dadashi, Alek Andreev.

6. Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval (29 Jul 2024). Kyra Wilson, Aylin Caliskan.

7. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution (05 Mar 2024). Flor Miriam Plaza del Arco, Amanda Cercas Curry, Alba Curry, Gavin Abercrombie, Dirk Hovy.

8. Bias in Language Models: Beyond Trick Tests and Toward RUTEd Evaluation (20 Feb 2024). Kristian Lum, Jacy Reese Anthis, Chirag Nagpal, Alexander D'Amour.

9. Evaluating and Mitigating Discrimination in Language Model Decisions (06 Dec 2023). Alex Tamkin, Amanda Askell, Liane Lovitt, Esin Durmus, Nicholas Joseph, Shauna Kravec, Karina Nguyen, Jared Kaplan, Deep Ganguli.

10. Evaluating Large Language Models through Gender and Racial Stereotypes (24 Nov 2023). Ananya Malik.

11. "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters (13 Oct 2023). Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng.

12. BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset (10 Jul 2023). Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, Yaodong Yang.

13. A Survey on Gender Bias in Natural Language Processing (28 Dec 2021). Karolina Stańczak, Isabelle Augenstein.

14. BBQ: A Hand-Built Bias Benchmark for Question Answering (15 Oct 2021). Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman.

15. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models (30 Sep 2020). Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman.

16. Social Bias Frames: Reasoning about Social and Power Implications of Language (10 Nov 2019). Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi.

17. On Measuring and Mitigating Biased Inferences of Word Embeddings (25 Aug 2019). Sunipa Dev, Tao Li, J. M. Phillips, Vivek Srikumar.

18. Gender Bias in Coreference Resolution (25 Apr 2018). Rachel Rudinger, Jason Naradowsky, Brian Leonard, Benjamin Van Durme.