SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models

13 October 2022
Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger

Papers citing "SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models"

25 / 25 papers shown
Say It Another Way: A Framework for User-Grounded Paraphrasing
Cléa Chataigner, Rebecca Ma, Prakhar Ganesh, Afaf Taik, Elliot Creager, G. Farnadi
06 May 2025
Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text
Jennifer Healey, Laurie Byrum, Md Nadeem Akhtar, Surabhi Bhargava, Moumita Sinha
05 May 2025
Large Language Models Are Effective Human Annotation Assistants, But Not Good Independent Annotators
Feng Gu, Zongxia Li, Carlos Rafael Colon, Benjamin Evans, Ishani Mondal, Jordan Boyd-Graber
09 Mar 2025
On the Mutual Influence of Gender and Occupation in LLM Representations
Haozhe An, Connor Baumler, Abhilasha Sancheti, Rachel Rudinger
09 Mar 2025
With a Grain of SALT: Are LLMs Fair Across Social Dimensions?
Samee Arif, Zohaib Khan, Agha Ali Raza, Awais Athar
16 Oct 2024
The Mystery of Compositional Generalization in Graph-based Generative Commonsense Reasoning
Xiyan Fu, Anette Frank
08 Oct 2024
On the Influence of Gender and Race in Romantic Relationship Prediction from Large Language Models
Abhilasha Sancheti, Haozhe An, Rachel Rudinger
05 Oct 2024
Anti-stereotypical Predictive Text Suggestions Do Not Reliably Yield Anti-stereotypical Writing
Connor Baumler, Hal Daumé III
30 Sep 2024
Fairness Definitions in Language Models Explained
Thang Viet Doan, Zhibo Chu, Zichong Wang, Wenbin Zhang
26 Jul 2024
CLIMB: A Benchmark of Clinical Bias in Large Language Models
Yubo Zhang, Shudi Hou, Mingyu Derek Ma, Wei Wang, Muhao Chen, Jieyu Zhao
07 Jul 2024
Social Bias Evaluation for Large Language Models Requires Prompt Variations
Rem Hida, Masahiro Kaneko, Naoaki Okazaki
03 Jul 2024
Cultural Conditioning or Placebo? On the Effectiveness of Socio-Demographic Prompting
Sagnik Mukherjee, Muhammad Farid Adilazuarda, Sunayana Sitaram, Kalika Bali, Alham Fikri Aji, Monojit Choudhury
17 Jun 2024
Do Large Language Models Discriminate in Hiring Decisions on the Basis of Race, Ethnicity, and Gender?
Haozhe An, Christabel Acquaye, Colin Wang, Zongxia Li, Rachel Rudinger
15 Jun 2024
mCSQA: Multilingual Commonsense Reasoning Dataset with Unified Creation Strategy by Language Models and Humans
Yusuke Sakai, Hidetaka Kamigaito, Taro Watanabe
06 Jun 2024
Culturally Aware and Adapted NLP: A Taxonomy and a Survey of the State of the Art
Chen Cecilia Liu, Iryna Gurevych, Anna Korhonen
06 Jun 2024
Stop! In the Name of Flaws: Disentangling Personal Names and Sociodemographic Attributes in NLP
Vagrant Gautam, Arjun Subramonian, Anne Lauscher, O. Keyes
27 May 2024
Fairness in Large Language Models: A Taxonomic Survey
Zhibo Chu, Zichong Wang, Wenbin Zhang
31 Mar 2024
Towards Measuring and Modeling "Culture" in LLMs: A Survey
Muhammad Farid Adilazuarda, Sagnik Mukherjee, Pradhyumna Lavania, Siddhant Singh, Alham Fikri Aji, Jacki O'Neill, Ashutosh Modi, Monojit Choudhury
05 Mar 2024
CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias
Vipul Gupta, Pranav Narayanan Venkit, Hugo Laurençon, Shomir Wilson, R. Passonneau
24 Aug 2023
Sociodemographic Bias in Language Models: A Survey and Forward Path
Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau
13 Jun 2023
Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models
Myra Cheng, Esin Durmus, Dan Jurafsky
29 May 2023
Nichelle and Nancy: The Influence of Demographic Attributes and Tokenization Length on First Name Biases
Haozhe An, Rachel Rudinger
26 May 2023
Having Beer after Prayer? Measuring Cultural Bias in Large Language Models
Tarek Naous, Michael Joseph Ryan, Alan Ritter, Wei-ping Xu
23 May 2023
BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman
15 Oct 2021
Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
Robert Wolfe, Aylin Caliskan
01 Oct 2021