StereoSet: Measuring stereotypical bias in pretrained language models (arXiv:2004.09456)
20 April 2020
Moin Nadeem, Anna Bethke, Siva Reddy

Papers citing "StereoSet: Measuring stereotypical bias in pretrained language models"

26 / 126 papers shown
Sense Embeddings are also Biased--Evaluating Social Biases in Static and Contextualised Sense Embeddings. Yi Zhou, Masahiro Kaneko, Danushka Bollegala. 14 Mar 2022.
Speciesist Language and Nonhuman Animal Bias in English Masked Language Models. Masashi Takeshita, Rafal Rzepka, K. Araki. 10 Mar 2022.
Do Transformers know symbolic rules, and would we know if they did? Tommi Gröndahl, Yu-Wen Guo, Nirmal Asokan. 19 Feb 2022.
Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. A. Feder Cooper, Emanuel Moss, Benjamin Laufer, Helen Nissenbaum. 10 Feb 2022.
Can Model Compression Improve NLP Fairness. Guangxuan Xu, Qingyuan Hu. 21 Jan 2022.
Does Entity Abstraction Help Generative Transformers Reason? Nicolas Angelard-Gontier, Siva Reddy, C. Pal. 05 Jan 2022.
A Survey on Gender Bias in Natural Language Processing. Karolina Stańczak, Isabelle Augenstein. 28 Dec 2021.
Efficient Large Scale Language Modeling with Mixtures of Experts. Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, ..., Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Ves Stoyanov. 20 Dec 2021.
Few-shot Learning with Multilingual Language Models. Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, ..., Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Ves Stoyanov, Xian Li. 20 Dec 2021.
Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models. Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt. 14 Dec 2021.
Simple Entity-Centric Questions Challenge Dense Retrievers. Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, Danqi Chen. 17 Sep 2021.
Improving Counterfactual Generation for Fair Hate Speech Detection. Aida Mostafazadeh Davani, Ali Omrani, Brendan Kennedy, M. Atari, Xiang Ren, Morteza Dehghani. 03 Aug 2021.
The MultiBERTs: BERT Reproductions for Robustness Analysis. Thibault Sellam, Steve Yadlowsky, Jason W. Wei, Naomi Saphra, Alexander D'Amour, ..., Iulia Turc, Jacob Eisenstein, Dipanjan Das, Ian Tenney, Ellie Pavlick. 30 Jun 2021.
Quantifying Social Biases in NLP: A Generalization and Empirical Comparison of Extrinsic Fairness Metrics. Paula Czarnowska, Yogarshi Vyas, Kashif Shah. 28 Jun 2021.
A Survey of Race, Racism, and Anti-Racism in NLP. Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov. 21 Jun 2021.
Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model. Kathleen C. Fraser, I. Nejadgholi, S. Kiritchenko. 04 Jun 2021.
How Reliable are Model Diagnostics? V. Aribandi, Yi Tay, Donald Metzler. 12 May 2021.
Causal Attention for Vision-Language Tasks. Xu Yang, Hanwang Zhang, Guojun Qi, Jianfei Cai. 05 Mar 2021.
Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models. Hannah Rose Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin, F. Dreyer, Aleksandar Shtedritski, Yuki M. Asano. 08 Feb 2021.
Debiasing Pre-trained Contextualised Embeddings. Masahiro Kaneko, Danushka Bollegala. 23 Jan 2021.
Underspecification Presents Challenges for Credibility in Modern Machine Learning. Alexander D'Amour, Katherine A. Heller, D. Moldovan, Ben Adlam, B. Alipanahi, ..., Kellie Webster, Steve Yadlowsky, T. Yun, Xiaohua Zhai, D. Sculley. 06 Nov 2020.
Critical Thinking for Language Models. Gregor Betz, Christian Voigt, Kyle Richardson. 15 Sep 2020.
On Robustness and Bias Analysis of BERT-based Relation Extraction. Luoqiu Li, Xiang Chen, Hongbin Ye, Zhen Bi, Shumin Deng, Ningyu Zhang, Huajun Chen. 14 Sep 2020.
Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
DQI: Measuring Data Quality in NLP. Swaroop Mishra, Anjana Arunkumar, Bhavdeep Singh Sachdeva, Chris Bryan, Chitta Baral. 02 May 2020.
The Woman Worked as a Babysitter: On Biases in Language Generation. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng. 03 Sep 2019.