Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans

Conference on Fairness, Accountability and Transparency (FAccT), 2024
16 January 2024
Messi H.J. Lee, Jacob M. Montgomery, Calvin K. Lai
arXiv (abs) · PDF · HTML

Papers citing "Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans"

Showing 18 of 18 citing papers.
Who's Asking? Evaluating LLM Robustness to Inquiry Personas in Factual Question Answering
Nil-Jana Akpinar, Chia-Jung Lee, Vanessa Murdock, Pietro Perona
14 Oct 2025

Prompt Optimization Across Multiple Agents for Representing Diverse Human Populations
Manh Hung Nguyen, Sebastian Tschiatschek, Adish Singla
08 Oct 2025

Mitigation of Gender and Ethnicity Bias in AI-Generated Stories through Model Explanations
Martha O. Dimgba, Sharon Oba, Ameeta Agrawal, Philippe J. Giabbanelli
03 Sep 2025

Bias Amplification in Stable Diffusion's Representation of Stigma Through Skin Tones and Their Homogeneity
Kyra Wilson, Sourojit Ghosh, Aylin Caliskan
24 Aug 2025

Evolving Collective Cognition in Human-Agent Hybrid Societies: How Agents Form Stances and Boundaries
Hanzhong Zhang, Muhua Huang, Jindong Wang
24 Aug 2025

The Prompt Makes the Person(a): A Systematic Evaluation of Sociodemographic Persona Prompting for Large Language Models
Marlene Lutz, Indira Sen, Georg Ahnert, Elisa Rogers, M. Strohmaier
21 Jul 2025

Large Language Models as Psychological Simulators: A Methodological Guide
Zhicheng Lin
20 Jun 2025

Position is Power: System Prompts as a Mechanism of Bias in Large Language Models (LLMs)
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Anna Neumann, Elisabeth Kirsten, Muhammad Bilal Zafar, Jatinder Singh
27 May 2025

Language Models Surface the Unwritten Code of Science and Society
Honglin Bao, Siyang Wu, Jiwoong Choi, Yingrong Mao, James A. Evans
25 May 2025

A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas
Pranav Narayanan Venkit, Jiayi Li, Yingfan Zhou, Sarah Rajtmajer, Shomir Wilson
07 May 2025

Visual Cues of Gender and Race are Associated with Stereotyping in Vision-Language Models
Messi H.J. Lee, Soyeon Jeon, Jacob M. Montgomery, Calvin K. Lai
07 Mar 2025

Implicit Bias in LLMs: A Survey
Xinru Lin, Luyang Li
04 Mar 2025

SPeCtrum: A Grounded Framework for Multidimensional Identity Representation in LLM-Based Agent
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Keyeun Lee, Seo Hyeong Kim, Seolhee Lee, Jinsu Eun, Yena Ko, ..., Esther Hehsun Kim, Seonghye Cho, Soeun Yang, Eun-mee Kim, Hajin Lim
12 Feb 2025

Actions Speak Louder than Words: Agent Decisions Reveal Implicit Biases in Language Models
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Yuxuan Li, Hirokazu Shirado, Sauvik Das
29 Jan 2025

Mitigating Propensity Bias of Large Language Models for Recommender Systems
Guixian Zhang, Guan Yuan, Debo Cheng, Lin Liu, Jiuyong Li, Shichao Zhang
30 Sep 2024

Probability of Differentiation Reveals Brittleness of Homogeneity Bias in Large Language Models
Messi H.J. Lee, Calvin K. Lai
10 Jul 2024

More Distinctively Black and Feminine Faces Lead to Increased Stereotyping in Vision-Language Models
Messi H.J. Lee, Jacob M. Montgomery, Calvin K. Lai
22 May 2024

When Collaborative Filtering is not Collaborative: Unfairness of PCA for Recommendations
David Liu, Jackie Baek, Tina Eliassi-Rad
15 Oct 2023