arXiv:2010.14534
Cited By
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
27 October 2020
Marion Bartl, Malvina Nissim, Albert Gatt
Papers citing "Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias" (50 of 66 papers shown)
A Comprehensive Analysis of Large Language Model Outputs: Similarity, Diversity, and Bias
Brandon Smith, Mohamed Reda Bouadjenek, Tahsin Alamgir Kheya, Phillip Dawson, S. Aryal (14 May 2025) [ALM, ELM]

Developing A Framework to Support Human Evaluation of Bias in Generated Free Response Text
Jennifer Healey, Laurie Byrum, Md Nadeem Akhtar, Surabhi Bhargava, Moumita Sinha (05 May 2025)

Assumed Identities: Quantifying Gender Bias in Machine Translation of Ambiguous Occupational Terms
Orfeas Menis-Mastromichalakis, Giorgos Filandrianos, M. Symeonaki, Giorgos Stamou (06 Mar 2025)

Rethinking LLM Bias Probing Using Lessons from the Social Sciences
Kirsten N. Morehouse, S. Swaroop, Weiwei Pan (28 Feb 2025)

Robust Bias Detection in MLMs and its Application to Human Trait Ratings
Ingroj Shrestha, Louis Tay, Padmini Srinivasan (24 Feb 2025)

Detecting Linguistic Bias in Government Documents Using Large Language Models
Milena de Swart, Floris den Hengst, Jieying Chen (20 Feb 2025)
Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs
Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo (04 Feb 2025)

LangFair: A Python Package for Assessing Bias and Fairness in Large Language Model Use Cases
Dylan Bouchard, Mohit Singh Chauhan, David Skarbrevik, Viren Bajaj, Zeya Ahmad (06 Jan 2025)

Everyone deserves their voice to be heard: Analyzing Predictive Gender Bias in ASR Models Applied to Dutch Speech Data
Rik Raes, Saskia Lensink, Mykola Pechenizkiy (14 Nov 2024)

Collapsed Language Models Promote Fairness
Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei (06 Oct 2024)

Are Female Carpenters like Blue Bananas? A Corpus Investigation of Occupation Gender Typicality
Da Ju, Karen Ulrich, Adina Williams (06 Aug 2024)

Downstream bias mitigation is all you need
Arkadeep Baksi, Rahul Singh, Tarun Joshi (01 Aug 2024) [AI4CE]
Understanding the Interplay of Scale, Data, and Bias in Language Models: A Case Study with BERT
Muhammad Ali, Swetasudha Panda, Qinlan Shen, Michael Wick, Ari Kobren (25 Jul 2024) [MILM]

Exploring Bengali Religious Dialect Biases in Large Language Models with Evaluation Perspectives
Azmine Toushik Wasi, Raima Islam, Mst Rafia Islam, Taki Hasan Rafi, Dong-Kyu Chae (25 Jul 2024)

Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words
Yijie Chen, Yijin Liu, Fandong Meng, Jinan Xu, Yufeng Chen, Jie Zhou (23 Jul 2024)

Evaluating Nuanced Bias in Large Language Model Free Response Answers
Jennifer Healey, Laurie Byrum, Md Nadeem Akhtar, Moumita Sinha (11 Jul 2024)

Leveraging Large Language Models to Measure Gender Bias in Gendered Languages
Erik Derner, Sara Sansalvador de la Fuente, Yoan Gutiérrez, Paloma Moreda, Nuria Oliver (19 Jun 2024)

The Life Cycle of Large Language Models: A Review of Biases in Education
Jinsook Lee, Yann Hicke, Renzhe Yu, Christopher A. Brooks, René F. Kizilcec (03 Jun 2024) [AI4Ed]
Large Language Model Bias Mitigation from the Perspective of Knowledge Editing
Ruizhe Chen, Yichen Li, Zikai Xiao, Zuo-Qiang Liu (15 May 2024) [KELM]

Fairness in Large Language Models: A Taxonomic Survey
Zhibo Chu, Zichong Wang, Wenbin Zhang (31 Mar 2024) [AILaw]

Potential and Challenges of Model Editing for Social Debiasing
Jianhao Yan, Futing Wang, Yafu Li, Yue Zhang (21 Feb 2024) [KELM]

Large Language Models are Geographically Biased
Rohin Manvi, Samar Khanna, Marshall Burke, David B. Lobell, Stefano Ermon (05 Feb 2024)

Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You
Felix Friedrich, Katharina Hämmerl, P. Schramowski, Manuel Brack, Jindřich Libovický, Kristian Kersting, Alexander M. Fraser (29 Jan 2024) [EGVM]

Multilingual large language models leak human stereotypes across language boundaries
Yang Trista Cao, Anna Sotnikova, Jieyu Zhao, Linda X. Zou, Rachel Rudinger, Hal Daumé (12 Dec 2023) [PILM]
Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth (03 Dec 2023) [ALM]

Evaluating Large Language Models through Gender and Racial Stereotypes
Ananya Malik (24 Nov 2023) [ELM]

Benefits and Harms of Large Language Models in Digital Mental Health
Munmun De Choudhury, Sachin R. Pendse, Neha Kumar (07 Nov 2023) [LM&MA, AI4MH]

Unlocking Bias Detection: Leveraging Transformer-Based Models for Content Analysis
Shaina Raza, Oluwanifemi Bamgbose, Veronica Chatrath, Shardul Ghuge, Yan Sidyakin, Abdullah Y. Muaad (30 Sep 2023)

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed (02 Sep 2023) [AILaw]
CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias
Vipul Gupta, Pranav Narayanan Venkit, Hugo Laurençon, Shomir Wilson, R. Passonneau (24 Aug 2023)

Mitigating Bias in Conversations: A Hate Speech Classifier and Debiaser with Prompts
Shaina Raza, Chen Ding, D. Pandya (14 Jul 2023) [FaML]

How Different Is Stereotypical Bias Across Languages?
Ibrahim Tolga Ozturk, R. Nedelchev, C. Heumann, Esteban Garces Arias, Marius Roger, Bernd Bischl, Matthias Aßenmacher (14 Jul 2023)

Gender Bias in BERT -- Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task
Sophie F. Jentzsch, Cigdem Turan (27 Jun 2023)

Gender Bias in Transformer Models: A comprehensive survey
Praneeth Nemani, Yericherla Deepak Joel, Pallavi Vijay, Farhana Ferdousi Liza (18 Jun 2023)

Politeness Stereotypes and Attack Vectors: Gender Stereotypes in Japanese and Korean Language Models
Victor Steinborn, Antonis Maronikolakis, Hinrich Schütze (16 Jun 2023)

Sociodemographic Bias in Language Models: A Survey and Forward Path
Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau (13 Jun 2023)
Trade-Offs Between Fairness and Privacy in Language Modeling
Cleo Matzken, Steffen Eger, Ivan Habernal (24 May 2023) [SILM]

On the Independence of Association Bias and Empirical Fairness in Language Models
Laura Cabello, Anna Katrine van Zee, Anders Søgaard (20 Apr 2023)

Measuring Gender Bias in West Slavic Language Models
Sandra Martinková, Karolina Stańczak, Isabelle Augenstein (12 Apr 2023)

Language Model Behavior: A Comprehensive Survey
Tyler A. Chang, Benjamin Bergen (20 Mar 2023) [VLM, LRM, LM&MA]

Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning
Hongyin Luo, James R. Glass (10 Mar 2023) [NAI]

BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models
Rafal Kocielnik, Shrimai Prabhumoye, Vivian Zhang, Roy Jiang, R. Alvarez, Anima Anandkumar (14 Feb 2023)

SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings
Jan Engler, Sandipan Sikdar, Marlene Lutz, M. Strohmaier (11 Jan 2023)
Can Current Task-oriented Dialogue Models Automate Real-world Scenarios in the Wild?
Sang-Woo Lee, Sungdong Kim, Donghyeon Ko, Dong-hyun Ham, Youngki Hong, ..., Wangkyo Jung, Kyunghyun Cho, Donghyun Kwak, H. Noh, W. Park (20 Dec 2022)

HERB: Measuring Hierarchical Regional Bias in Pre-trained Language Models
Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Shi Wang, Anton Ragni, Jie Fu (05 Nov 2022)

MABEL: Attenuating Gender Bias using Textual Entailment Data
Jacqueline He, Mengzhou Xia, C. Fellbaum, Danqi Chen (26 Oct 2022)

Detecting Unintended Social Bias in Toxic Language Datasets
Nihar Ranjan Sahoo, Himanshu Gupta, P. Bhattacharyya (21 Oct 2022)

Choose Your Lenses: Flaws in Gender Bias Evaluation
Hadas Orgad, Yonatan Belinkov (20 Oct 2022)

Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki (06 Oct 2022)

Efficient Gender Debiasing of Pre-trained Indic Language Models
Neeraja Kirtane, V. Manushree, Aditya Kane (08 Sep 2022)