Theory-Grounded Measurement of U.S. Social Stereotypes in English Language Models

North American Chapter of the Association for Computational Linguistics (NAACL), 2022
23 June 2022
Yang Trista Cao, Anna Sotnikova, Hal Daumé III, Rachel Rudinger, Linda X. Zou
arXiv (abs) | PDF | HTML

Papers citing "Theory-Grounded Measurement of U.S. Social Stereotypes in English Language Models"

30 papers shown
Artificial Impressions: Evaluating Large Language Model Behavior Through the Lens of Trait Impressions
Nicholas Deas, Kathleen McKeown
10 Oct 2025

Who's Asking? Investigating Bias Through the Lens of Disability Framed Queries in LLMs
Srikant Panda, Vishnu Hari, Kalpana Panda, Amit Agarwal, Hitesh Laxmichand Patel
18 Aug 2025

A Survey on Stereotype Detection in Natural Language Processing (ACM Computing Surveys, 2025)
Alessandra Teresa Cignarella, Anastasia Giachanou, Els Lefever
23 May 2025

Splits! A Flexible Dataset and Evaluation Framework for Sociocultural Linguistic Investigation
Eylon Caplan, Tania Chakraborty, Dan Goldwasser
06 Apr 2025

BiasEdit: Debiasing Stereotyped Language Models via Model Editing
Xin Xu, Wei Xu, Ningyu Zhang, Julian McAuley
11 Mar 2025

Language Models Predict Empathy Gaps Between Social In-groups and Out-groups (NAACL, 2025)
Yu Hou, Hal Daumé III, Rachel Rudinger
02 Mar 2025

Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking (CHI, 2025)
Greta Warren, Irina Shklovski, Isabelle Augenstein
13 Feb 2025

Ethics Whitepaper: Whitepaper on Ethical Research into Large Language Models
Eddie L. Ungless, Nikolas Vitsakis, Zeerak Talat, James Garforth, Bjorn Ross, Arno Onken, Atoosa Kasirzadeh, Alexandra Birch
17 Oct 2024

With a Grain of SALT: Are LLMs Fair Across Social Dimensions?
Samee Arif, Zohaib Khan, Agha Ali Raza, Awais Athar
16 Oct 2024

On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models (EMNLP, 2024)
Abhilasha Sancheti, Haozhe An, Rachel Rudinger
05 Oct 2024

Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield Anti-stereotypical Writing
Connor Baumler, Hal Daumé III
30 Sep 2024

A Taxonomy of Stereotype Content in Large Language Models
Gandalf Nicolas, Aylin Caliskan
31 Jul 2024

Visual Stereotypes of Autism Spectrum in Janus-Pro-7B, DALL-E, Stable Diffusion, SDXL, FLUX, and Midjourney
Maciej Wodziński, Marcin Rządeczka, Anastazja Szuła, Marta Sokół, Marcin Moskalewicz
23 Jul 2024

Who is better at math, Jenny or Jingzhen? Uncovering Stereotypes in Large Language Models
Zara Siddique, Liam D. Turner, Luis Espinosa-Anke
09 Jul 2024

GPT is Not an Annotator: The Necessity of Human Annotation in Fairness Benchmark Construction
Virginia K. Felkner, Jennifer A. Thompson, Jonathan May
24 May 2024

Laissez-Faire Harms: Algorithmic Biases in Generative Language Models
Evan Shieh, Faye-Marie Vassel, Cassidy R. Sugimoto, Thema Monroe-White
11 Apr 2024

Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
Flor Miriam Plaza del Arco, Amanda Cercas Curry, Alba Curry, Gavin Abercrombie, Dirk Hovy
05 Mar 2024

Measuring Machine Learning Harms from Stereotypes Requires Understanding Who Is Harmed by Which Errors in What Ways (FAccT, 2024)
Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett
06 Feb 2024

Multilingual large language models leak human stereotypes across language boundaries
Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé III
12 Dec 2023

CoMPosT: Characterizing and Evaluating Caricature in LLM Simulations (EMNLP, 2023)
Myra Cheng, Tiziano Piccardi, Diyi Yang
17 Oct 2023

WinoQueer: A Community-in-the-Loop Benchmark for Anti-LGBTQ+ Bias in Large Language Models (ACL, 2023)
Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May
26 Jun 2023

Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models (ACL, 2023)
Myra Cheng, Esin Durmus, Dan Jurafsky
29 May 2023

Having Beer after Prayer? Measuring Cultural Bias in Large Language Models (ACL, 2023)
Tarek Naous, Michael Joseph Ryan, Alan Ritter, Wei Xu
23 May 2023

disco: a toolkit for Distributional Control of Generative Models (ACL, 2023)
Germán Kruszewski, Jos Rozen, Marc Dymetman
08 Mar 2023

Aligning Language Models with Preferences through f-divergence Minimization (ICML, 2023)
Dongyoung Go, Tomasz Korbak, Germán Kruszewski, Jos Rozen, Nahyeon Ryu, Marc Dymetman
16 Feb 2023

A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?
Kathleen C. Fraser, Svetlana Kiritchenko, Isar Nejadgholi
14 Feb 2023

Consistency is Key: Disentangling Label Variation in Natural Language Processing with Intra-Annotator Agreement
Gavin Abercrombie, Tanvi Dinkar, Amanda Cercas Curry, Verena Rieser, Dirk Hovy
25 Jan 2023

Undesirable Biases in NLP: Addressing Challenges of Measurement
Oskar van der Wal, Dominik Bachmann, Alina Leidinger, Leendert van Maanen, Willem H. Zuidema, Katrin Schulz
24 Nov 2022

A Robust Bias Mitigation Procedure Based on the Stereotype Content Model
Eddie L. Ungless, Amy Rafferty, Hrichika Nag, Bjorn Ross
26 Oct 2022

SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models (EACL, 2022)
Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger
13 Oct 2022