ArXiv: 2401.08511
The Gaps between Pre-train and Downstream Settings in Bias Evaluation and Debiasing
16 January 2024
Masahiro Kaneko, Danushka Bollegala, Timothy Baldwin
Papers citing "The Gaps between Pre-train and Downstream Settings in Bias Evaluation and Debiasing" (6 papers)
Evaluating Gender Bias of Pre-trained Language Models in Natural Language Inference by Considering All Labels
Panatchakorn Anantaprayoon, Masahiro Kaneko, Naoaki Okazaki
18 Sep 2023
The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki
16 Sep 2023
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji
27 Apr 2023
BBQ: A Hand-Built Bias Benchmark for Question Answering
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman
15 Oct 2021
Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
Timo Schick, Sahana Udupa, Hinrich Schütze
28 Feb 2021
Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko, Danushka Bollegala
23 Jan 2021