Gender bias and stereotypes in Large Language Models
Hadas Kotek, Rikker Dockum, David Q. Sun
arXiv:2308.14921 · 28 August 2023
Papers citing "Gender bias and stereotypes in Large Language Models" (24 of 24 shown):
1. Investigating and Mitigating Stereotype-aware Unfairness in LLM-based Recommendations. Zihuai Zhao, Wenqi Fan, Yao Wu, Qing Li. 05 Apr 2025.
2. CONGRAD: Conflicting Gradient Filtering for Multilingual Preference Alignment. Jiangnan Li, Thuy-Trang Vu, Christian Herold, Amirhossein Tebbifakhr, Shahram Khadivi, Gholamreza Haffari. 31 Mar 2025.
3. Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental. Roberto Balestri. 18 Mar 2025.
4. Surveying Attitudinal Alignment Between Large Language Models Vs. Humans Towards 17 Sustainable Development Goals. Qingyang Wu, Ying Xu, Tingsong Xiao, Yunze Xiao, Yitong Li, …, Yichi Zhang, Shanghai Zhong, Yuwei Zhang, Wei Lu, Yifan Yang. 17 Jan 2025.
5. Revisiting Rogers' Paradox in the Context of Human-AI Interaction. K. M. Collins, Umang Bhatt, Ilia Sucholutsky. 16 Jan 2025.
6. Towards Effective Discrimination Testing for Generative AI. Thomas P. Zollo, Nikita Rajaneesh, Richard Zemel, Talia B. Gillis, Emily Black. 31 Dec 2024.
7. Understanding Multimodal LLMs: the Mechanistic Interpretability of Llava in Visual Question Answering. Zeping Yu, Sophia Ananiadou. 17 Nov 2024.
8. BIG5-CHAT: Shaping LLM Personalities Through Training on Human-Grounded Data. Wenkai Li, Jiarui Liu, Andy Liu, Xuhui Zhou, Mona Diab, Maarten Sap. 21 Oct 2024.
9. One Language, Many Gaps: Evaluating Dialect Fairness and Robustness of Large Language Models in Reasoning Tasks. Fangru Lin, Shaoguang Mao, Emanuele La Malfa, Valentin Hofmann, Adrian de Wynter, Jing Yao, Si-Qing Chen, Michael Wooldridge, Furu Wei. 14 Oct 2024.
10. Uncertainty-aware Reward Model: Teaching Reward Models to Know What is Unknown. Xingzhou Lou, Dong Yan, Wei Shen, Yuzi Yan, Jian Xie, Junge Zhang. 01 Oct 2024.
11. Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios. Vishal Mirza, Rahul Kulkarni, Aakanksha Jadhav. 22 Sep 2024.
12. A Catalog of Fairness-Aware Practices in Machine Learning Engineering [FaML]. Gianmario Voria, Giulia Sellitto, Carmine Ferrara, Francesco Abate, A. Lucia, F. Ferrucci, Gemma Catolino, Fabio Palomba. 29 Aug 2024.
13. Students' Perceived Roles, Opportunities, and Challenges of a Generative AI-powered Teachable Agent: A Case of Middle School Math Class. Yukyeong Song, Jinhee Kim, Zifeng Liu, Chenglu Li, Wanli Xing. 26 Aug 2024.
14. GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models. Kunsheng Tang, Wenbo Zhou, Jie Zhang, Aishan Liu, Gelei Deng, Shuai Li, Peigui Qi, Weiming Zhang, Tianwei Zhang, Nenghai Yu. 22 Aug 2024.
15. Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes. Damin Zhang, Yi Zhang, Geetanjali Bihani, Julia Taylor Rayz. 06 May 2024.
16. Large Language Models (LLMs) as Agents for Augmented Democracy [LLMAG]. Jairo Gudiño-Rosero, Umberto Grandi, César A. Hidalgo. 06 May 2024.
17. Heterogeneous Contrastive Learning for Foundation Models and Beyond [VLM]. Lecheng Zheng, Baoyu Jing, Zihao Li, Hanghang Tong, Jingrui He. 30 Mar 2024.
18. What's in a Name? Auditing Large Language Models for Race and Gender Bias. Amit Haim, Alejandro Salinas, Julian Nyarko. 21 Feb 2024.
19. Causal Learning for Trustworthy Recommender Systems: A Survey [CML]. Jin Li, Shoujin Wang, Qi Zhang, LongBing Cao, Fang Chen, Xiuzhen Zhang, Dietmar Jannach, Charu C. Aggarwal. 13 Feb 2024.
20. Prompting Fairness: Artificial Intelligence as Game Players. Jazmia Henry. 08 Feb 2024.
21. How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study. F. Megahed, Ying-Ju Chen, Joshua A. Ferris, S. Knoth, L. A. Jones-Farmer. 17 Feb 2023.
22. ChatGPT Makes Medicine Easy to Swallow: An Exploratory Case Study on Simplified Radiology Reports [LM&MA, MedIm]. Katharina Jeblick, B. Schachtner, Jakob Dexl, Andreas Mittermeier, Anna Theresa Stüber, …, Tobias Weber, Philipp Wesp, B. Sabel, J. Ricke, Michael Ingrisch. 30 Dec 2022.
23. "I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset. Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams. 18 May 2022.
24. The Woman Worked as a Babysitter: On Biases in Language Generation. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng. 03 Sep 2019.