Gender bias and stereotypes in Large Language Models

ACM Collective Intelligence Conference (CI), 2023
28 August 2023
Hadas Kotek
Rikker Dockum
David Q. Sun
Papers citing "Gender bias and stereotypes in Large Language Models"

Showing 50 of 149 citing papers.
Cross-cultural value alignment frameworks for responsible AI governance: Evidence from China-West comparative analysis
Haijiang Liu, Jinguang Gu, Xun Wu, Daniel Hershcovich, Qiaoling Xiao
21 Nov 2025
Leak@$k$: Unlearning Does Not Make LLMs Forget Under Probabilistic Decoding
Hadi Reisizadeh, Jiajun Ruan, Yiwei Chen, Soumyadeep Pal, Sijia Liu, Mingyi Hong
07 Nov 2025
Controlling Gender Bias in Retrieval via a Backpack Architecture
Amirabbas Afzali, Amirreza Velae, Iman Ahmadi, Mohammad Aliannejadi
02 Nov 2025
A word association network methodology for evaluating implicit biases in LLMs compared to humans
Katherine Abramski, Giulio Rossetti, Massimo Stella
28 Oct 2025
Social Simulations with Large Language Model Risk Utopian Illusion
Ning Bian, Xianpei Han, Hongyu Lin, Baolei Wu, Jun Wang
24 Oct 2025
Evaluating LLMs for Career Guidance: Comparative Analysis of Computing Competency Recommendations Across Ten African Countries
Precious Eze, Stephanie Lunn, Bruk Berhane
20 Oct 2025
Trust in foundation models and GenAI: A geographic perspective
Grant McKenzie, K. Janowicz, Carsten Kessler
20 Oct 2025
Reproducibility: The New Frontier in AI Governance
Israel Mason-Williams, Gabryel Mason-Williams
13 Oct 2025
Who are you, ChatGPT? Personality and Demographic Style in LLM-Generated Content
Dana Sotto Porat, Ella Rabinovich
13 Oct 2025
Cross-Modal Content Optimization for Steering Web Agent Preferences
Tanqiu Jiang, Min Bai, Nikolaos Pappas, Yanjun Qi, Sandesh Swamy
04 Oct 2025
When Voice Matters: Evidence of Gender Disparity in Positional Bias of SpeechLLMs
Shree Harsha Bokkahalli Satish, G. Henter, Éva Székely
01 Oct 2025
Large-Scale Constraint Generation - Can LLMs Parse Hundreds of Constraints?
Matteo Boffa, Jiaxuan You
28 Sep 2025
Evaluating Bias in Spoken Dialogue LLMs for Real-World Decisions and Recommendations
Y. Wu, Tianrui Wang, Yizhou Peng, Yi-Wen Chao, Xuyi Zhuang, Xinsheng Wang, Shunshun Yin, Ziyang Ma
27 Sep 2025
Customizing Visual Emotion Evaluation for MLLMs: An Open-vocabulary, Multifaceted, and Scalable Approach
Daiqing Wu, Dongbao Yang, Sicheng Zhao, Can Ma
26 Sep 2025
Do Bias Benchmarks Generalise? Evidence from Voice-based Evaluation of Gender Bias in SpeechLLMs
Shree Harsha Bokkahalli Satish, G. Henter, Éva Székely
24 Sep 2025
Simulating a Bias Mitigation Scenario in Large Language Models
Kiana Kiashemshaki, Mohammad Jalili Torkamani, Negin Mahmoudi, Meysam Shirdel Bilehsavar
17 Sep 2025
Gender-Neutral Rewriting in Italian: Models, Approaches, and Trade-offs
Andrea Piergentili, Beatrice Savoldi, Matteo Negri, L. Bentivogli
16 Sep 2025
Who Gets the Mic? Investigating Gender Bias in the Speaker Assignment of a Speech-LLM
Dariia Puhach, Amir H. Payberah, Éva Székely
19 Aug 2025
Who's Asking? Investigating Bias Through the Lens of Disability Framed Queries in LLMs
Srikant Panda, Vishnu Hari, Kalpana Panda, Amit Agarwal, Hitesh Laxmichand Patel
18 Aug 2025
Vision-Language Models display a strong gender bias
Aiswarya Konavoor, Raj Abhijit Dandekar, Rajat Dandekar, Sreedath Panat
15 Aug 2025
A Close Reading Approach to Gender Narrative Biases in AI-Generated Stories
Daniel Raffini, Agnese Macori, Marco Angelini, Tiziana Catarci
13 Aug 2025
Exploring Causal Effect of Social Bias on Faithfulness Hallucinations in Large Language Models
Zhenliang Zhang, Junzhe Zhang, Xinyu Hu, Huixuan Zhang, Xiaojun Wan
11 Aug 2025
Augmenting Bias Detection in LLMs Using Topological Data Analysis
Keshav Varadarajan, Tananun Songdechakraiwut
11 Aug 2025
Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution
Falaah Arif Khan, N. Sivakumar, Yinong Oliver Wang, Katherine Metcalf, Cezanne Camacho, B. Theobald, Luca Zappella, N. Apostoloff
09 Aug 2025
An Empirical Investigation of Gender Stereotype Representation in Large Language Models: The Italian Case
Gioele Giachino, Marco Rondina, A. Vetrò, Riccardo Coppola, Juan Carlos De Martin
25 Jul 2025
AI Should Sense Better, Not Just Scale Bigger: Adaptive Sensing as a Paradigm Shift
Eunsu Baek, Keondo Park, Jeonggil Ko, Min Hwan Oh, Taesik Gong, Hyung-Sin Kim
10 Jul 2025
MIST: Towards Multi-dimensional Implicit BiaS Evaluation of LLMs for Theory of Mind
Yanlin Li, Hao Liu, Huimin Liu, Kun Wang, Y. X. Wei, Yupeng Hu
17 Jun 2025
Fragile Preferences: A Deep Dive Into Order Effects in Large Language Models
Haonan Yin, Shai Vardi, Vidyanand Choudhary
17 Jun 2025
Ming-Omni: A Unified Multimodal Model for Perception and Generation
Inclusion AI, Biao Gong, Cheng Zou, C. Zheng, Chunluan Zhou, ..., Zipeng Feng, Zhijiang Fang, Zhihao Qiu, Ziyuan Huang, Z. He
11 Jun 2025
Dissecting Bias in LLMs: A Mechanistic Interpretability Perspective
Bhavik Chandna, Zubair Bashir, Procheta Sen
05 Jun 2025
Is Your Model Fairly Certain? Uncertainty-Aware Fairness Evaluation for LLMs
Yinong Oliver Wang, N. Sivakumar, Falaah Arif Khan, Rin Metcalf Susa, Adam Goliñski, Natalie Mackraz, B. Theobald, Luca Zappella, N. Apostoloff
29 May 2025
The Multilingual Divide and Its Impact on Global AI Safety
Aidan Peppin, Julia Kreutzer, Alice Schoenauer Sebag, Kelly Marchisio, Beyza Ermis, ..., Wei-Yin Ko, Ahmet Üstün, Matthias Gallé, Marzieh Fadaee, Sara Hooker
27 May 2025
Do Large Language Models (Really) Need Statistical Foundations?
Weijie Su
25 May 2025
Position: Language Models Should be Used to Surface the Unwritten Code of Science and Society
Honglin Bao, Siyang Wu, Jiwoong Choi, Yingrong Mao, James A. Evans
25 May 2025
Ensembling Sparse Autoencoders
Soham Gadgil, Chris Lin, Su-In Lee
21 May 2025
A Comprehensive Analysis of Large Language Model Outputs: Similarity, Diversity, and Bias
Brandon Smith, Mohamed Reda Bouadjenek, Tahsin Alamgir Kheya, Phillip Dawson, S. Aryal
14 May 2025
Detecting Prefix Bias in LLM-based Reward Models
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Ashwin Kumar, Yuzi He, Aram H. Markosyan, Bobbie Chern, Imanol Arrieta-Ibarra
13 May 2025
FairTranslate: An English-French Dataset for Gender Bias Evaluation in Machine Translation by Overcoming Gender Binarity
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Fanny Jourdan, Yannick Chevalier, Cécile Favre
22 Apr 2025
Identifying and Mitigating the Influence of the Prior Distribution in Large Language Models
Liyi Zhang, Veniamin Veselovsky, R. Thomas McCoy, Thomas Griffiths
17 Apr 2025
Benchmarking Adversarial Robustness to Bias Elicitation in Large Language Models: Scalable Automated Assessment with LLM-as-a-Judge
Machine-mediated learning (ML), 2025
Riccardo Cantini, A. Orsino, Massimo Ruggiero, Domenico Talia
10 Apr 2025
Societal Impacts Research Requires Benchmarks for Creative Composition Tasks
Judy Hanwen Shen, Carlos Guestrin
09 Apr 2025
Investigating and Mitigating Stereotype-aware Unfairness in LLM-based Recommendations
Zihuai Zhao, Wenqi Fan, Yao Wu, Qing Li
05 Apr 2025
The LLM Wears Prada: Analysing Gender Bias and Stereotypes through Online Shopping Data
Massimiliano Luca, Ciro Beneduce, Bruno Lepri, Jacopo Staiano
02 Apr 2025
CONGRAD: Conflicting Gradient Filtering for Multilingual Preference Alignment
Jiangnan Li, Thuy-Trang Vu, Christian Herold, Amirhossein Tebbifakhr, Shahram Khadivi, Gholamreza Haffari
31 Mar 2025
Beyond the Reported Cutoff: Where Large Language Models Fall Short on Financial Knowledge
Agam Shah, Meghaj Tarte, Joshua Zhang, Wei Xu, Sudheer Chava
30 Mar 2025
Evaluating how LLM annotations represent diverse views on contentious topics
Megan A. Brown, Shubham Atreja, Libby Hemphill, Patrick Y. Wu
29 Mar 2025
Interpretable LLM Guardrails via Sparse Representation Steering
Zeqing He, Peng Kuang, Huiyu Xu, Kui Ren, Wenhui Zhang, Zhixuan Chu
21 Mar 2025
The Model Hears You: Audio Language Model Deployments Should Consider the Principle of Least Privilege
Luxi He, Xiangyu Qi, Michel Liao, Inyoung Cheong, Prateek Mittal, Danqi Chen, Peter Henderson
21 Mar 2025
More Women, Same Stereotypes: Unpacking the Gender Bias Paradox in Large Language Models
Evan Chen, Run-Jun Zhan, Yan-Bai Lin, Hung-Hsuan Chen
20 Mar 2025
Gender and content bias in Large Language Models: a case study on Google Gemini 2.0 Flash Experimental
Roberto Balestri
18 Mar 2025