Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
arXiv:1906.07337 · 18 June 2019
Papers citing "Measuring Bias in Contextualized Word Representations" (showing 50 of 272)
Evaluating Biased Attitude Associations of Language Models in an Intersectional Context
Shiva Omrani Sabbaghi
Robert Wolfe
Aylin Caliskan
26
22
0
07 Jul 2023
On Evaluating and Mitigating Gender Biases in Multilingual Settings
Aniket Vashishtha
Kabir Ahuja
Sunayana Sitaram
13
23
0
04 Jul 2023
CBBQ: A Chinese Bias Benchmark Dataset Curated with Human-AI Collaboration for Large Language Models
Yufei Huang
Deyi Xiong
ALM
34
17
0
28 Jun 2023
Gender Bias in BERT -- Measuring and Analysing Biases through Sentiment Rating in a Realistic Downstream Classification Task
Sophie F. Jentzsch
Cigdem Turan
13
31
0
27 Jun 2023
Opportunities and Risks of LLMs for Scalable Deliberation with Polis
Christopher T. Small
Ivan Vendrov
Esin Durmus
Hadjar Homaei
Elizabeth Barry
Julien Cornebise
Ted Suzman
Deep Ganguli
Colin Megill
24
26
0
20 Jun 2023
TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models
Yue Huang
Qihui Zhang
Philip S. Yu
Lichao Sun
13
46
0
20 Jun 2023
Sociodemographic Bias in Language Models: A Survey and Forward Path
Vipul Gupta
Pranav Narayanan Venkit
Shomir Wilson
R. Passonneau
42
20
0
13 Jun 2023
Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks
Katelyn Mei
Sonia Fereidooni
Aylin Caliskan
14
45
0
08 Jun 2023
Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions
Himanshu Thakur
Atishay Jain
Praneetha Vaddamanu
Paul Pu Liang
Louis-Philippe Morency
14
29
0
07 Jun 2023
MISGENDERED: Limits of Large Language Models in Understanding Pronouns
Tamanna Hossain
Sunipa Dev
Sameer Singh
AILaw
25
34
0
06 Jun 2023
An Invariant Learning Characterization of Controlled Text Generation
Carolina Zheng
Claudia Shi
Keyon Vafa
Amir Feder
David M. Blei
OOD
22
8
0
31 May 2023
Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales
Paulina Toro Isaza
Guangxuan Xu
Akintoye Oloko
Yufang Hou
Nanyun Peng
Dakuo Wang
13
4
0
26 May 2023
An Efficient Multilingual Language Model Compression through Vocabulary Trimming
Asahi Ushio
Yi Zhou
Jose Camacho-Collados
39
7
0
24 May 2023
Trade-Offs Between Fairness and Privacy in Language Modeling
Cleo Matzken
Steffen Eger
Ivan Habernal
SILM
39
6
0
24 May 2023
Gender Biases in Automatic Evaluation Metrics for Image Captioning
Haoyi Qiu
Zi-Yi Dou
Tianlu Wang
Asli Celikyilmaz
Nanyun Peng
EGVM
24
14
0
24 May 2023
Language-Agnostic Bias Detection in Language Models with Bias Probing
Abdullatif Köksal
Omer F. Yalcin
Ahmet Akbiyik
M. Kilavuz
Anna Korhonen
Hinrich Schütze
23
1
0
22 May 2023
Multilingual Holistic Bias: Extending Descriptors and Patterns to Unveil Demographic Biases in Languages at Scale
Marta R. Costa-jussà
Pierre Yves Andrews
Eric Michael Smith
Prangthip Hansanti
C. Ropers
Elahe Kalbassi
Cynthia Gao
Daniel Licht
Carleigh Wood
32
15
0
22 May 2023
Cognitive network science reveals bias in GPT-3, ChatGPT, and GPT-4 mirroring math anxiety in high-school students
Katherine Abramski
Salvatore Citraro
Luigi Lombardi
Giulio Rossetti
Massimo Stella
15
5
0
22 May 2023
On Bias and Fairness in NLP: Investigating the Impact of Bias and Debiasing in Language Models on the Fairness of Toxicity Detection
Fatma Elsafoury
Stamos Katsigiannis
30
1
0
22 May 2023
This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
Seraphina Goldfarb-Tarrant
Eddie L. Ungless
Esma Balkir
Su Lin Blodgett
29
9
0
22 May 2023
In the Name of Fairness: Assessing the Bias in Clinical Record De-identification
Yuxin Xiao
S. Lim
Tom Pollard
Marzyeh Ghassemi
13
12
0
18 May 2023
From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models
Shangbin Feng
Chan Young Park
Yuhan Liu
Yulia Tsvetkov
19
227
0
15 May 2023
StarCoder: may the source be with you!
Raymond Li
Loubna Ben Allal
Yangtian Zi
Niklas Muennighoff
Denis Kocetkov
...
Sean M. Hughes
Thomas Wolf
Arjun Guha
Leandro von Werra
H. de Vries
37
713
0
09 May 2023
On the Independence of Association Bias and Empirical Fairness in Language Models
Laura Cabello
Anna Katrine van Zee
Anders Søgaard
24
25
0
20 Apr 2023
An Evaluation on Large Language Model Outputs: Discourse and Memorization
Adrian de Wynter
Xun Wang
Alex Sokolov
Qilong Gu
Si-Qing Chen
ELM
74
32
0
17 Apr 2023
Evaluation of Social Biases in Recent Large Pre-Trained Models
Swapnil Sharma
Nikita Anand
V. Kranthi Kiran G.
Alind Jain
16
0
0
13 Apr 2023
Measuring Gender Bias in West Slavic Language Models
Sandra Martinková
Karolina Stańczak
Isabelle Augenstein
15
8
0
12 Apr 2023
Toxicity in ChatGPT: Analyzing Persona-assigned Language Models
A. Deshpande
Vishvak Murahari
Tanmay Rajpurohit
A. Kalyan
Karthik Narasimhan
LM&MA
LLMAG
11
334
0
11 Apr 2023
Language Model Behavior: A Comprehensive Survey
Tyler A. Chang
Benjamin Bergen
VLM
LRM
LM&MA
27
102
0
20 Mar 2023
Model Sketching: Centering Concepts in Early-Stage Machine Learning Model Design
Michelle S. Lam
Zixian Ma
Anne Li
Izequiel Freitas
Dakuo Wang
James A. Landay
Michael S. Bernstein
164
22
0
06 Mar 2023
Toward Fairness in Text Generation via Mutual Information Minimization based on Importance Sampling
Rui Wang
Pengyu Cheng
Ricardo Henao
12
8
0
25 Feb 2023
In-Depth Look at Word Filling Societal Bias Measures
Matúš Pikuliak
Ivana Benová
Viktor Bachratý
21
9
0
24 Feb 2023
Fairness in Language Models Beyond English: Gaps and Challenges
Krithika Ramesh
Sunayana Sitaram
Monojit Choudhury
30
23
0
24 Feb 2023
The Capacity for Moral Self-Correction in Large Language Models
Deep Ganguli
Amanda Askell
Nicholas Schiefer
Thomas I. Liao
Kamilė Lukošiūtė
...
Tom B. Brown
C. Olah
Jack Clark
Sam Bowman
Jared Kaplan
LRM
ReLM
31
158
0
15 Feb 2023
BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models
Rafal Kocielnik
Shrimai Prabhumoye
Vivian Zhang
Roy Jiang
R. Alvarez
Anima Anandkumar
30
6
0
14 Feb 2023
Nationality Bias in Text Generation
Pranav Narayanan Venkit
Sanjana Gautam
Ruchi Panchanadikar
Ting-Hao 'Kenneth' Huang
Shomir Wilson
22
51
0
05 Feb 2023
How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification
E. Tokpo
Pieter Delobelle
Bettina Berendt
T. Calders
35
7
0
30 Jan 2023
Comparing Intrinsic Gender Bias Evaluation Measures without using Human Annotated Examples
Masahiro Kaneko
Danushka Bollegala
Naoaki Okazaki
24
9
0
28 Jan 2023
An Empirical Study of Metrics to Measure Representational Harms in Pre-Trained Language Models
Saghar Hosseini
Hamid Palangi
Ahmed Hassan Awadallah
22
21
0
22 Jan 2023
Ensemble Transfer Learning for Multilingual Coreference Resolution
T. Lai
Heng Ji
13
1
0
22 Jan 2023
Dissociating language and thought in large language models
Kyle Mahowald
Anna A. Ivanova
I. Blank
Nancy Kanwisher
J. Tenenbaum
Evelina Fedorenko
ELM
ReLM
23
209
0
16 Jan 2023
Generalizable Natural Language Processing Framework for Migraine Reporting from Social Media
Yuting Guo
Swati Rajwal
S. Lakamana
Chia-Chun Chiang
P. Menell
...
Wan-ju Chao
C. Chao
T. Schwedt
Imon Banerjee
A. Sarker
11
6
0
23 Dec 2022
Trustworthy Social Bias Measurement
Rishi Bommasani
Percy Liang
27
10
0
20 Dec 2022
The effects of gender bias in word embeddings on depression prediction
Gizem Sogancioglu
Heysem Kaya
16
3
0
15 Dec 2022
Training Data Influence Analysis and Estimation: A Survey
Zayd Hammoudeh
Daniel Lowd
TDI
29
82
0
09 Dec 2022
Undesirable Biases in NLP: Addressing Challenges of Measurement
Oskar van der Wal
Dominik Bachmann
Alina Leidinger
L. van Maanen
Willem H. Zuidema
K. Schulz
17
6
0
24 Nov 2022
Validating Large Language Models with ReLM
Michael Kuchnik
Virginia Smith
George Amvrosiadis
21
27
0
21 Nov 2022
Conceptor-Aided Debiasing of Large Language Models
Yifei Li
Lyle Ungar
João Sedoc
6
4
0
20 Nov 2022
Galactica: A Large Language Model for Science
Ross Taylor
Marcin Kardas
Guillem Cucurull
Thomas Scialom
Anthony Hartshorn
Elvis Saravia
Andrew Poulton
Viktor Kerkez
Robert Stojnic
ELM
ReLM
32
725
0
16 Nov 2022
Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models
Silke Husse
Andreas Spitz
11
6
0
15 Nov 2022