ResearchTrend.AI

arXiv:2403.03121
Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution

5 March 2024
Flor Miriam Plaza del Arco, Amanda Cercas Curry, Alba Curry, Gavin Abercrombie, Dirk Hovy

Papers citing "Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution"

23 papers shown
Out of Sight Out of Mind, Out of Sight Out of Mind: Measuring Bias in Language Models Against Overlooked Marginalized Groups in Regional Contexts
Fatma Elsafoury, David Hartmann
17 Apr 2025

NoveltyBench: Evaluating Language Models for Humanlike Diversity
Yiming Zhang, Harshita Diddee, Susan Holm, Hanchen Liu, Xinyue Liu, Vinay Samuel, Barry Wang, Daphne Ippolito
07 Apr 2025

The LLM Wears Prada: Analysing Gender Bias and Stereotypes through Online Shopping Data
Massimiliano Luca, Ciro Beneduce, Bruno Lepri, Jacopo Staiano
02 Apr 2025

Toward Lightweight and Fast Decoders for Diffusion Models in Image and Video Generation
Alexey Buzovkin, Evgeny Shilov
06 Mar 2025

Language Models Predict Empathy Gaps Between Social In-groups and Out-groups
Yu Hou, Hal Daumé III, Rachel Rudinger
02 Mar 2025

Beneath the Surface: How Large Language Models Reflect Hidden Bias
Jinhao Pan, Chahat Raj, Ziyu Yao, Ziwei Zhu
27 Feb 2025

AI Will Always Love You: Studying Implicit Biases in Romantic AI Companions
Clare Grogan, Jackie Kay, Maria Perez-Ortiz
27 Feb 2025

Gender Bias in Decision-Making with Large Language Models: A Study of Relationship Conflicts
Sharon Levy, William D. Adler, T. Karver, Mark Dredze, Michelle R. Kaufman
14 Oct 2024

Evaluating Gender Bias of LLMs in Making Morality Judgements
Divij Bajaj, Yuanyuan Lei, Jonathan Tong, Ruihong Huang
13 Oct 2024

Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield Anti-stereotypical Writing
Connor Baumler, Hal Daumé III
30 Sep 2024

"A Woman is More Culturally Knowledgeable than A Man?": The Effect of Personas on Cultural Norm Interpretation in LLMs
M. Kamruzzaman, Hieu Minh Nguyen, Nazmul Hassan, Gene Louis Kim
18 Sep 2024

Challenging Fairness: A Comprehensive Exploration of Bias in LLM-Based Recommendations
Shahnewaz Karim Sakib, Anindya Bijoy Das
17 Sep 2024

Covert Bias: The Severity of Social Views' Unalignment in Language Models Towards Implicit and Explicit Opinion
Abeer Aldayel, Areej Alokaili, Rehab Alahmadi
15 Aug 2024

How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies
Alina Leidinger, Richard Rogers
16 Jul 2024

Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models
Flor Miriam Plaza del Arco, Amanda Cercas Curry, Susanna Paoli, Alba Curry, Dirk Hovy
09 Jul 2024

An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models
Jayanta Sadhu, Maneesha Rani Saha, Rifat Shahriyar
08 Jul 2024

Helpful assistant or fruitful facilitator? Investigating how personas affect language model behavior
Pedro Henrique Luz de Araujo, Benjamin Roth
02 Jul 2024

Language Model Council: Democratically Benchmarking Foundation Models on Highly Subjective Tasks
Justin Zhao, Flor Miriam Plaza del Arco, Amanda Cercas Curry
12 Jun 2024

MBBQ: A Dataset for Cross-Lingual Comparison of Stereotypes in Generative LLMs
Vera Neplenbroek, Arianna Bisazza, Raquel Fernández
11 Jun 2024

Are Models Biased on Text without Gender-related Language?
Catarina G Belém, P. Seshadri, Yasaman Razeghi, Sameer Singh
01 May 2024

Emotion Analysis in NLP: Trends, Gaps and Roadmap for Future Directions
Flor Miriam Plaza del Arco, Alba Curry, Amanda Cercas Curry, Dirk Hovy
02 Mar 2024

Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs
Shashank Gupta, Vaishnavi Shrivastava, A. Deshpande, A. Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot
08 Nov 2023

The Woman Worked as a Babysitter: On Biases in Language Generation
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
03 Sep 2019