ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Home › Papers › 2010.00133 › Cited By
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models

30 September 2020
Nikita Nangia
Clara Vania
Rasika Bhalerao
Samuel R. Bowman

Papers citing "CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models"

Showing 50 of 130 citing papers:
  1. A Survey on Fairness in Large Language Models. Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang. 20 Aug 2023. [ALM]
  2. Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases. Yingji Li, Mengnan Du, Xin Wang, Ying Wang. 04 Jul 2023.
  3. An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models. Zhongbin Xie, Thomas Lukasiewicz. 06 Jun 2023.
  4. Uncovering and Quantifying Social Biases in Code Generation. Yong-Jin Liu, Xiaokang Chen, Yan Gao, Zhe Su, Fengji Zhang, Daoguang Zan, Jian-Guang Lou, Pin-Yu Chen, Tsung-Yi Ho. 24 May 2023.
  5. An Efficient Multilingual Language Model Compression through Vocabulary Trimming. Asahi Ushio, Yi Zhou, Jose Camacho-Collados. 24 May 2023.
  6. Having Beer after Prayer? Measuring Cultural Bias in Large Language Models. Tarek Naous, Michael Joseph Ryan, Alan Ritter, Wei-ping Xu. 23 May 2023.
  7. Language-Agnostic Bias Detection in Language Models with Bias Probing. Abdullatif Köksal, Omer F. Yalcin, Ahmet Akbiyik, M. Kilavuz, Anna Korhonen, Hinrich Schütze. 22 May 2023.
  8. This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models. Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett. 22 May 2023.
  9. Comparing Biases and the Impact of Multilingual Training across Multiple Languages. Sharon Levy, Neha Ann John, Ling Liu, Yogarshi Vyas, Jie Ma, Yoshinari Fujinuma, Miguel Ballesteros, Vittorio Castelli, Dan Roth. 18 May 2023.
  10. A Better Way to Do Masked Language Model Scoring. Carina Kauf, Anna A. Ivanova. 17 May 2023.
  11. On the Origins of Bias in NLP through the Lens of the Jim Code. Fatma Elsafoury, Gavin Abercrombie. 16 May 2023.
  12. Constructing Holistic Measures for Social Biases in Masked Language Models. Yang Liu, Yuexian Hou. 12 May 2023.
  13. Evaluation of Social Biases in Recent Large Pre-Trained Models. Swapnil Sharma, Nikita Anand, V. KranthiKiranG., Alind Jain. 13 Apr 2023.
  14. Assessing Language Model Deployment with Risk Cards. Leon Derczynski, Hannah Rose Kirk, Vidhisha Balachandran, Sachin Kumar, Yulia Tsvetkov, M. Leiser, Saif Mohammad. 31 Mar 2023.
  15. Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning. Hongyin Luo, James R. Glass. 10 Mar 2023. [NAI]
  16. LaSER: Language-Specific Event Recommendation. Sara Abdollahi, Simon Gottschalk, Elena Demidova. 24 Feb 2023.
  17. In-Depth Look at Word Filling Societal Bias Measures. Matúš Pikuliak, Ivana Benová, Viktor Bachratý. 24 Feb 2023.
  18. Auditing large language models: a three-layered approach. Jakob Mokander, Jonas Schuett, Hannah Rose Kirk, Luciano Floridi. 16 Feb 2023. [AILaw, MLAU]
  19. Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning? Chengwei Qin, Q. Li, Ruochen Zhao, Shafiq R. Joty. 16 Feb 2023. [VLM, LRM]
  20. The Capacity for Moral Self-Correction in Large Language Models. Deep Ganguli, Amanda Askell, Nicholas Schiefer, Thomas I. Liao, Kamilė Lukošiūtė, ..., Tom B. Brown, C. Olah, Jack Clark, Sam Bowman, Jared Kaplan. 15 Feb 2023. [LRM, ReLM]
  21. Counter-GAP: Counterfactual Bias Evaluation through Gendered Ambiguous Pronouns. Zhongbin Xie, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu. 11 Feb 2023.
  22. The Gradient of Generative AI Release: Methods and Considerations. Irene Solaiman. 05 Feb 2023.
  23. FineDeb: A Debiasing Framework for Language Models. Akash Saravanan, Dhruv Mullick, Habibur Rahman, Nidhi Hegde. 05 Feb 2023. [FedML, AI4CE]
  24. Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets. Tosin P. Adewumi, Isabella Sodergren, Lama Alkhaled, Sana Sabah Sabry, F. Liwicki, Marcus Liwicki. 28 Jan 2023.
  25. BinaryVQA: A Versatile Test Set to Evaluate the Out-of-Distribution Generalization of VQA Models. Ali Borji. 28 Jan 2023. [CoGe]
  26. Trustworthy Social Bias Measurement. Rishi Bommasani, Percy Liang. 20 Dec 2022.
  27. Language model acceptability judgements are not always robust to context. Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, R. Levy, Adina Williams. 18 Dec 2022.
  28. Manifestations of Xenophobia in AI Systems. Nenad Tomašev, J. L. Maynard, Iason Gabriel. 15 Dec 2022.
  29. Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models. Silke Husse, Andreas Spitz. 15 Nov 2022.
  30. Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness. C. Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, ..., Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer. 10 Nov 2022.
  31. BLOOM: A 176B-Parameter Open-Access Multilingual Language Model. BigScience Workshop: Teven Le Scao, Angela Fan, Christopher Akiki, ..., Zhongli Xie, Zifan Ye, M. Bras, Younes Belkada, Thomas Wolf. 09 Nov 2022. [VLM]
  32. Detecting Unintended Social Bias in Toxic Language Datasets. Nihar Ranjan Sahoo, Himanshu Gupta, P. Bhattacharyya. 21 Oct 2022.
  33. Choose Your Lenses: Flaws in Gender Bias Evaluation. Hadas Orgad, Yonatan Belinkov. 20 Oct 2022.
  34. The Tail Wagging the Dog: Dataset Construction Biases of Social Bias Benchmarks. Nikil Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, Kai-Wei Chang. 18 Oct 2022. [ALM]
  35. BERTScore is Unfair: On Social Bias in Language Model-Based Metrics for Text Generation. Tianxiang Sun, Junliang He, Xipeng Qiu, Xuanjing Huang. 14 Oct 2022.
  36. SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models. Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger. 13 Oct 2022.
  37. Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks. Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki. 06 Oct 2022.
  38. GLM-130B: An Open Bilingual Pre-trained Model. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng-Zhen Zhang, Yuxiao Dong, Jie Tang. 05 Oct 2022. [BDL, LRM]
  39. Re-contextualizing Fairness in NLP: The Case of India. Shaily Bhatt, Sunipa Dev, Partha P. Talukdar, Shachi Dave, Vinodkumar Prabhakaran. 25 Sep 2022.
  40. Few-shot Adaptation Works with UnpredicTable Data. Jun Shern Chan, Michael Pieler, Jonathan Jao, Jérémy Scheurer, Ethan Perez. 01 Aug 2022.
  41. Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models. Virginia K. Felkner, Ho-Chun Herbert Chang, Eugene Jang, Jonathan May. 23 Jun 2022. [OSLM]
  42. Fewer Errors, but More Stereotypes? The Effect of Model Size on Gender Bias. Yarden Tal, Inbal Magar, Roy Schwartz. 20 Jun 2022.
  43. Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models. Maribeth Rauh, John F. J. Mellor, J. Uesato, Po-Sen Huang, Johannes Welbl, ..., Amelia Glaese, G. Irving, Iason Gabriel, William S. Isaac, Lisa Anne Hendricks. 16 Jun 2022.
  44. What Changed? Investigating Debiasing Methods using Causal Mediation Analysis. Su-Ha Jeoung, Jana Diesner. 01 Jun 2022. [CML]
  45. Eliciting and Understanding Cross-Task Skills with Task-Level Mixture-of-Experts. Qinyuan Ye, Juan Zha, Xiang Ren. 25 May 2022. [MoE]
  46. "I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset. Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams. 18 May 2022.
  47. OPT: Open Pre-trained Transformer Language Models. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, ..., Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer. 02 May 2022. [VLM, OSLM, AI4CE]
  48. Detoxifying Language Models with a Toxic Corpus. Yoon A Park, Frank Rudzicz. 30 Apr 2022.
  49. Identifying and Measuring Token-Level Sentiment Bias in Pre-trained Language Models with Prompts. Apoorv Garg, Deval Srivastava, Zhiyang Xu, Lifu Huang. 15 Apr 2022.
  50. Fair and Argumentative Language Modeling for Computational Argumentation. Carolin Holtermann, Anne Lauscher, Simone Paolo Ponzetto. 08 Apr 2022.