ResearchTrend.AI
Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models

Annual Meeting of the Association for Computational Linguistics (ACL), 2023
29 May 2023
Myra Cheng
Esin Durmus
Dan Jurafsky
arXiv: 2305.18189

Papers citing "Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models"

45 / 145 papers shown
Social Skill Training with Large Language Models
Diyi Yang
Caleb Ziems
William B. Held
Omar Shaikh
Michael S. Bernstein
John C. Mitchell
LLMAG
179
19
0
05 Apr 2024
Do Large Language Models Rank Fairly? An Empirical Study on the Fairness of LLMs as Rankers
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Yuan Wang
Xuyang Wu
Hsin-Tai Wu
Zhiqiang Tao
Yi Fang
ALM
286
18
0
04 Apr 2024
Template-Based Probes Are Imperfect Lenses for Counterfactual Bias Evaluation in LLMs
Farnaz Kohankhaki
David B. Emerson
Laleh Seyyed-Kalantari
Faiza Khan Khattak
392
2
0
04 Apr 2024
Fairness in Large Language Models: A Taxonomic Survey
Zhibo Chu
Sribala Vidyadhari Chinta
Wenbin Zhang
AILaw
261
89
0
31 Mar 2024
Argument Quality Assessment in the Age of Instruction-Following Large Language Models
Henning Wachsmuth
Gabriella Lapesa
Elena Cabrio
Anne Lauscher
Joonsuk Park
Eva Maria Vecchi
S. Villata
Timon Ziegenbein
230
3
0
24 Mar 2024
Can AI Outperform Human Experts in Creating Social Media Creatives?
Eunkyung Park
Raymond K. Wong
Junbum Kwon
210
1
0
19 Mar 2024
HateCOT: An Explanation-Enhanced Dataset for Generalizable Offensive Speech Detection via Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
H. Nghiem
Hal Daumé
367
6
0
18 Mar 2024
On the Essence and Prospect: An Investigation of Alignment Approaches for Big Models
Xinpeng Wang
Shitong Duan
Xiaoyuan Yi
Jing Yao
Shanlin Zhou
Zhihua Wei
Peng Zhang
Dongkuan Xu
Maosong Sun
Xing Xie
OffRL
390
22
0
07 Mar 2024
Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
Flor Miriam Plaza del Arco
Amanda Cercas Curry
Alba Curry
Gavin Abercrombie
Dirk Hovy
477
41
0
05 Mar 2024
Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate
Kumail Alhamoud
Cathy Buerger
Jenny T Liang
Joshua Garland
Maarten Sap
232
20
0
29 Feb 2024
Random Silicon Sampling: Simulating Human Sub-Population Opinion Using a Large Language Model Based on Group-Level Demographic Information
Seungjong Sun
Eungu Lee
Dongyan Nan
Xiangying Zhao
Wonbyung Lee
Bernard J. Jansen
Jang Hyun Kim
279
32
0
28 Feb 2024
Shallow Synthesis of Knowledge in GPT-Generated Texts: A Case Study in Automatic Related Work Composition
Anna Martin-Boyle
Aahan Tyagi
Marti A. Hearst
Dongyeop Kang
177
11
0
19 Feb 2024
Examining Gender and Racial Bias in Large Vision-Language Models Using a Novel Dataset of Parallel Images
Kathleen C. Fraser
S. Kiritchenko
262
64
0
08 Feb 2024
Measuring Implicit Bias in Explicitly Unbiased Large Language Models
Xuechunzi Bai
Angelina Wang
Ilia Sucholutsky
Thomas Griffiths
333
48
0
06 Feb 2024
AnthroScore: A Computational Linguistic Measure of Anthropomorphism
Myra Cheng
Kristina Gligorić
Tiziano Piccardi
Dan Jurafsky
180
32
0
03 Feb 2024
Beyond Behaviorist Representational Harms: A Plan for Measurement and Mitigation
Conference on Fairness, Accountability and Transparency (FAccT), 2024
Jennifer Chien
David Danks
298
30
0
25 Jan 2024
Canvil: Designerly Adaptation for LLM-Powered User Experiences
International Conference on Human Factors in Computing Systems (CHI), 2024
K. J. Kevin Feng
Q. V. Liao
Ziang Xiao
Jennifer Wortman Vaughan
Amy X. Zhang
David W. McDonald
197
22
0
17 Jan 2024
Large Language Models Portray Socially Subordinate Groups as More Homogeneous, Consistent with a Bias Observed in Humans
Conference on Fairness, Accountability and Transparency (FAccT), 2024
Messi H.J. Lee
Jacob M. Montgomery
Calvin K. Lai
215
54
0
16 Jan 2024
"What's important here?": Opportunities and Challenges of Using LLMs in
  Retrieving Information from Web Interfaces
"What's important here?": Opportunities and Challenges of Using LLMs in Retrieving Information from Web Interfaces
Faria Huq
Jeffrey P. Bigham
Nikolas Martelaro
236
8
0
11 Dec 2023
Fair Text Classification with Wasserstein Independence
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Thibaud Leteno
Antoine Gourru
Charlotte Laclau
Rémi Emonet
Christophe Gravier
FaML
251
5
0
21 Nov 2023
P^3SUM: Preserving Author's Perspective in News Summarization with Diffusion Language Models
Yuhan Liu
Shangbin Feng
Xiaochuang Han
Vidhisha Balachandran
Chan Young Park
Sachin Kumar
Yulia Tsvetkov
DiffM
258
7
0
16 Nov 2023
You don't need a personality test to know these models are unreliable: Assessing the Reliability of Large Language Models on Psychometric Instruments
Bangzhao Shu
Lechen Zhang
Minje Choi
Lavinia Dunagan
Lajanugen Logeswaran
Moontae Lee
Dallas Card
David Jurgens
282
62
0
16 Nov 2023
Simulating Opinion Dynamics with Networks of LLM-based Agents
Yun-Shiuan Chuang
Agam Goyal
Sameer Narendran
Siddharth Suresh
Robert Hawkins
Sijia Yang
Dhavan Shah
Junjie Hu
Timothy T. Rogers
AI4CE
487
127
0
16 Nov 2023
A Material Lens on Coloniality in NLP
William B. Held
Camille Harris
Michael Best
Diyi Yang
322
21
0
14 Nov 2023
Intentional Biases in LLM Responses
Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), 2023
Nicklaus Badyal
Derek Jacoby
Yvonne Coady
123
7
0
11 Nov 2023
ChiMed-GPT: A Chinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Yuanhe Tian
Ruyi Gan
Yan Song
Jiaxing Zhang
Yongdong Zhang
AI4MH AI4CE LM&MA
465
69
0
10 Nov 2023
Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs
Shashank Gupta
Vaishnavi Shrivastava
Ameet Deshpande
Ashwin Kalyan
Peter Clark
Ashish Sabharwal
Tushar Khot
465
170
0
08 Nov 2023
Personas as a Way to Model Truthfulness in Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Nitish Joshi
Javier Rando
Abulhair Saparov
Najoung Kim
He He
HILM
394
40
0
27 Oct 2023
SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents
International Conference on Learning Representations (ICLR), 2023
Xuhui Zhou
Hao Zhu
Leena Mathur
Ruohong Zhang
Haofei Yu
...
Louis-Philippe Morency
Yonatan Bisk
Daniel Fried
Graham Neubig
Maarten Sap
LLMAG
381
222
0
18 Oct 2023
CoMPosT: Characterizing and Evaluating Caricature in LLM Simulations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Myra Cheng
Tiziano Piccardi
Diyi Yang
LLMAG
329
106
0
17 Oct 2023
Rehearsal: Simulating Conflict to Teach Conflict Resolution
International Conference on Human Factors in Computing Systems (CHI), 2023
Omar Shaikh
Valentino Chai
Michele J. Gelfand
Diyi Yang
Michael S. Bernstein
199
87
0
21 Sep 2023
Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2023
Tilman Beck
Hendrik Schuff
Anne Lauscher
Iryna Gurevych
321
65
0
13 Sep 2023
FairMonitor: A Four-Stage Automatic Framework for Detecting Stereotypes and Biases in Large Language Models
Yanhong Bai
Jiabao Zhao
Jinxin Shi
Tingjiang Wei
Xingjiao Wu
Liangbo He
132
1
0
21 Aug 2023
A Survey on Fairness in Large Language Models
Yingji Li
Mengnan Du
Rui Song
Xin Wang
Ying Wang
ALM
383
98
0
20 Aug 2023
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Shu Yang
Man Ho Lam
E. Li
Shujie Ren
Wenxuan Wang
Wenxiang Jiao
Zhaopeng Tu
Michael R. Lyu
362
66
0
07 Aug 2023
How User Language Affects Conflict Fatality Estimates in ChatGPT
Daniel Kazenwadel
C. Steinert
77
2
0
26 Jul 2023
Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications
Vishesh Thakur
227
41
0
18 Jul 2023
Evaluating Biased Attitude Associations of Language Models in an Intersectional Context
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2023
Shiva Omrani Sabbaghi
Robert Wolfe
Aylin Caliskan
201
29
0
07 Jul 2023
Towards Measuring the Representation of Subjective Global Opinions in Language Models
Esin Durmus
Karina Nguyen
Thomas I. Liao
Nicholas Schiefer
Amanda Askell
...
Alex Tamkin
Janel Thamkul
Jared Kaplan
Jack Clark
Deep Ganguli
353
335
0
28 Jun 2023
Opportunities and Risks of LLMs for Scalable Deliberation with Polis
Christopher T. Small
Ivan Vendrov
Esin Durmus
Hadjar Homaei
Elizabeth Barry
Julien Cornebise
Ted Suzman
Deep Ganguli
Colin Megill
195
52
0
20 Jun 2023
This Land is {Your, My} Land: Evaluating Geopolitical Biases in Language Models
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Bryan Li
Samar Haider
Chris Callison-Burch
472
27
0
24 May 2023
ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages
International Computing Education Research Workshop (ICER), 2023
Sourojit Ghosh
Aylin Caliskan
212
103
0
17 May 2023
Coarse race data conceals disparities in clinical risk score performance
Machine Learning in Health Care (MLHC), 2023
Rajiv Movva
Divya Shanmugam
Kaihua Hou
P. Pathak
John Guttag
Nikhil Garg
Emma Pierson
194
33
0
18 Apr 2023
BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models
Rafal Kocielnik
Shrimai Prabhumoye
Vivian Zhang
Roy Jiang
R. Alvarez
Anima Anandkumar
306
11
0
14 Feb 2023
Generalized Word Shift Graphs: A Method for Visualizing and Explaining Pairwise Comparisons Between Texts
Ryan J. Gallagher
M. Frank
Lewis Mitchell
A. Schwartz
A. J. Reagan
C. Danforth
P. Dodds
218
76
0
05 Aug 2020