arXiv 2010.14534
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Marion Bartl, Malvina Nissim, Albert Gatt
27 October 2020
Papers citing "Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias"
17 / 17 papers shown
Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text
Jennifer Healey, Laurie Byrum, Md Nadeem Akhtar, Surabhi Bhargava, Moumita Sinha
05 May 2025

Collapsed Language Models Promote Fairness
Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei
06 Oct 2024

Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You
Felix Friedrich, Katharina Hämmerl, P. Schramowski, Manuel Brack, Jindrich Libovický, Kristian Kersting, Alexander M. Fraser
29 Jan 2024

Multilingual large language models leak human stereotypes across language boundaries
Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé
12 Dec 2023

Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning
Hongyin Luo, James R. Glass
10 Mar 2023

SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings
Jan Engler, Sandipan Sikdar, Marlene Lutz, M. Strohmaier
11 Jan 2023

Can Current Task-oriented Dialogue Models Automate Real-world Scenarios in the Wild?
Sang-Woo Lee, Sungdong Kim, Donghyeon Ko, Dong-hyun Ham, Youngki Hong, ..., Wangkyo Jung, Kyunghyun Cho, Donghyun Kwak, H. Noh, W. Park
20 Dec 2022

Detecting Unintended Social Bias in Toxic Language Datasets
Nihar Ranjan Sahoo, Himanshu Gupta, P. Bhattacharyya
21 Oct 2022

Choose Your Lenses: Flaws in Gender Bias Evaluation
Hadas Orgad, Yonatan Belinkov
20 Oct 2022

Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki
06 Oct 2022

FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle, Bettina Berendt
10 Jul 2022

Analyzing Gender Representation in Multilingual Models
Hila Gonen, Shauli Ravfogel, Yoav Goldberg
20 Apr 2022

Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
Masashi Takeshita, Rafal Rzepka, K. Araki
10 Mar 2022

A Survey on Gender Bias in Natural Language Processing
Karolina Stańczak, Isabelle Augenstein
28 Dec 2021

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
14 Dec 2021

Sustainable Modular Debiasing of Language Models
Anne Lauscher, Tobias Lüken, Goran Glavas
08 Sep 2021

Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model
Kathleen C. Fraser, I. Nejadgholi, S. Kiritchenko
04 Jun 2021