Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
14 December 2021 · arXiv:2112.07447
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
Papers citing "Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models"
18 of 18 papers shown
A Note on Bias to Complete. Jia Xu, Mona Diab. 18 Feb 2024.
Semantic Properties of cosine based bias scores for word embeddings. Sarah Schröder, Alexander Schulz, Fabian Hinder, Barbara Hammer. 27 Jan 2024.
Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models. Adhithya Saravanan, Rafal Kocielnik, Roy Jiang, P. Han, A. Anandkumar. 05 Dec 2023.
Selecting Shots for Demographic Fairness in Few-Shot Learning with Large Language Models. Carlos Alejandro Aguirre, Kuleen Sasse, Isabel Cachola, Mark Dredze. 14 Nov 2023.
Evaluating Bias and Fairness in Gender-Neutral Pretrained Vision-and-Language Models. Laura Cabello, Emanuele Bugliarello, Stephanie Brandl, Desmond Elliott. 26 Oct 2023.
TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models. Yue Huang, Qihui Zhang, Philip S. Yu, Lichao Sun. 20 Jun 2023.
On the Independence of Association Bias and Empirical Fairness in Language Models. Laura Cabello, Anna Katrine van Zee, Anders Søgaard. 20 Apr 2023.
In-Depth Look at Word Filling Societal Bias Measures. Matúš Pikuliak, Ivana Benová, Viktor Bachratý. 24 Feb 2023.
Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey. Otávio Parraga, Martin D. Móre, C. M. Oliveira, Nathan Gavenski, L. S. Kupssinskü, Adilson Medronha, L. V. Moura, Gabriel S. Simões, Rodrigo C. Barros. 10 Nov 2022.
Bridging Fairness and Environmental Sustainability in Natural Language Processing. Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher. 08 Nov 2022.
HERB: Measuring Hierarchical Regional Bias in Pre-trained Language Models. Yizhi Li, Ge Zhang, Bohao Yang, Chenghua Lin, Shi Wang, Anton Ragni, Jie Fu. 05 Nov 2022.
Choose Your Lenses: Flaws in Gender Bias Evaluation. Hadas Orgad, Yonatan Belinkov. 20 Oct 2022.
Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks. Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki. 06 Oct 2022.
Debiasing Word Embeddings with Nonlinear Geometry. Lu Cheng, Nayoung Kim, Huan Liu. 29 Aug 2022.
FairDistillation: Mitigating Stereotyping in Language Models. Pieter Delobelle, Bettina Berendt. 10 Jul 2022.
"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset. Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams. 18 May 2022.
RobBERTje: a Distilled Dutch BERT Model. Pieter Delobelle, Thomas Winters, Bettina Berendt. 28 Apr 2022.
Efficient Estimation of Word Representations in Vector Space. Tomáš Mikolov, Kai Chen, G. Corrado, J. Dean. 16 Jan 2013.