
Assessing Social and Intersectional Biases in Contextualized Word Representations

Neural Information Processing Systems (NeurIPS), 2019
4 November 2019
Y. Tan
Elisa Celis
    FaML

Papers citing "Assessing Social and Intersectional Biases in Contextualized Word Representations"

42 / 142 papers shown
CM3: A Causal Masked Multimodal Model of the Internet
Armen Aghajanyan
Po-Yao (Bernie) Huang
Candace Ross
Vladimir Karpukhin
Hu Xu
...
Dmytro Okhonko
Mandar Joshi
Gargi Ghosh
M. Lewis
Luke Zettlemoyer
367
169
0
19 Jan 2022
Unintended Bias in Language Model-driven Conversational Recommendation
Tianshu Shen
Jiaru Li
Mohamed Reda Bouadjenek
Zheda Mai
Scott Sanner
202
7
0
17 Jan 2022
A Survey on Gender Bias in Natural Language Processing
Karolina Stańczak
Isabelle Augenstein
237
145
0
28 Dec 2021
Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle
E. Tokpo
T. Calders
Bettina Berendt
152
27
0
14 Dec 2021
Extending the WILDS Benchmark for Unsupervised Adaptation
Shiori Sagawa
Pang Wei Koh
Tony Lee
Irena Gao
Sang Michael Xie
...
Kate Saenko
Tatsunori Hashimoto
Sergey Levine
Chelsea Finn
Abigail Z. Jacobs
OOD
242
113
0
09 Dec 2021
SynthBio: A Case Study in Human-AI Collaborative Curation of Text Datasets
Ann Yuan
Daphne Ippolito
Vitaly Nikolaev
Chris Callison-Burch
Andy Coenen
Sebastian Gehrmann
SyDa
347
24
0
11 Nov 2021
Improving Gender Fairness of Pre-Trained Language Models without Catastrophic Forgetting
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Zahra Fatemi
Chen Xing
Wenhao Liu
Caiming Xiong
CLL
301
42
0
11 Oct 2021
Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?
Rochelle Choenni
Ekaterina Shutova
R. Rooij
206
30
0
21 Sep 2021
Simple Entity-Centric Questions Challenge Dense Retrievers
Christopher Sciavolino
Zexuan Zhong
Jinhyuk Lee
Danqi Chen
RALM
460
191
0
17 Sep 2021
Hi, my name is Martha: Using names to measure and mitigate bias in generative dialogue models
Eric Michael Smith
Adina Williams
246
31
0
07 Sep 2021
Boosting Search Engines with Interactive Agents
Leonard Adolphs
Benjamin Boerschinger
Christian Buck
Michelle Chen Huebscher
Massimiliano Ciaramita
...
Thomas Hofmann
Yannic Kilcher
Sascha Rothe
Pier Giuseppe Sessa
Lierni Sestorain Saralegui
LLMAG
341
24
0
01 Sep 2021
Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
Sunipa Dev
Masoud Monajatipoor
Anaelia Ovalle
Arjun Subramonian
J. M. Phillips
Kai-Wei Chang
330
196
0
27 Aug 2021
On the Interaction of Belief Bias and Explanations
Findings (Findings), 2021
Ana Valeria González
Anna Rogers
Anders Søgaard
FAtt
218
20
0
29 Jun 2021
Towards Understanding and Mitigating Social Biases in Language Models
Paul Pu Liang
Chiyu Wu
Louis-Philippe Morency
Ruslan Salakhutdinov
312
475
0
24 Jun 2021
A Survey of Race, Racism, and Anti-Racism in NLP
Annual Meeting of the Association for Computational Linguistics (ACL), 2021
Anjalie Field
Su Lin Blodgett
Zeerak Talat
Yulia Tsvetkov
328
142
0
21 Jun 2021
Does Robustness Improve Fairness? Approaching Fairness with Word Substitution Robustness Methods for Text Classification
Findings (Findings), 2021
Yada Pruksachatkun
Satyapriya Krishna
Jwala Dhamala
Rahul Gupta
Kai-Wei Chang
171
33
0
21 Jun 2021
Unmasking the Mask -- Evaluating Social Biases in Masked Language Models
AAAI Conference on Artificial Intelligence (AAAI), 2021
Masahiro Kaneko
Danushka Bollegala
234
86
0
15 Apr 2021
Low-Complexity Probing via Finding Subnetworks
North American Chapter of the Association for Computational Linguistics (NAACL), 2021
Steven Cao
Victor Sanh
Alexander M. Rush
205
68
0
08 Apr 2021
Alignment of Language Agents
Zachary Kenton
Tom Everitt
Laura Weidinger
Iason Gabriel
Vladimir Mikulik
G. Irving
247
206
0
26 Mar 2021
Large Pre-trained Language Models Contain Human-like Biases of What is Right and Wrong to Do
Nature Machine Intelligence (Nat. Mach. Intell.), 2021
P. Schramowski
Cigdem Turan
Nico Andersen
Constantin Rothkopf
Kristian Kersting
317
359
0
08 Mar 2021
WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings
Bhavya Ghai
Md. Naimul Hoque
Klaus Mueller
188
28
0
05 Mar 2021
Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models
Neural Information Processing Systems (NeurIPS), 2021
Hannah Rose Kirk
Yennie Jun
Haider Iqbal
Elias Benussi
Filippo Volpin
F. Dreyer
Aleksandar Shtedritski
Yuki M. Asano
276
223
0
08 Feb 2021
Disembodied Machine Learning: On the Illusion of Objectivity in NLP
Zeerak Talat
Smarika Lulz
Joachim Bingel
Isabelle Augenstein
257
55
0
28 Jan 2021
Stereotype and Skew: Quantifying Gender Bias in Pre-trained and Fine-tuned Language Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021
Daniel de Vassimon Manela
D. Errington
Thomas Fisher
B. V. Breugel
Pasquale Minervini
222
101
0
24 Jan 2021
Debiasing Pre-trained Contextualised Embeddings
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2021
Masahiro Kaneko
Danushka Bollegala
507
152
0
23 Jan 2021
The Geometry of Distributed Representations for Better Alignment, Attenuated Bias, and Improved Interpretability
Sunipa Dev
269
1
0
25 Nov 2020
Unequal Representations: Analyzing Intersectional Biases in Word Embeddings Using Representational Similarity Analysis
International Conference on Computational Linguistics (COLING), 2020
Michael A. Lepori
196
17
0
24 Nov 2020
Fairness and Robustness in Invariant Learning: A Case Study in Toxicity Classification
Robert Adragna
Elliot Creager
David Madras
R. Zemel
OOD
FaML
225
45
0
12 Nov 2020
Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
Conference on Fairness, Accountability and Transparency (FAccT), 2020
Ryan Steed
Aylin Caliskan
SSL
336
173
0
28 Oct 2020
Mitigating Gender Bias in Machine Translation with Target Gender Annotations
Artūrs Stafanovičs
Toms Bergmanis
Mārcis Pinnis
164
68
0
13 Oct 2020
Measuring and Reducing Gendered Correlations in Pre-trained Models
Kellie Webster
Xuezhi Wang
Ian Tenney
Alex Beutel
Emily Pitler
Ellie Pavlick
Jilin Chen
Ed Chi
Slav Petrov
FaML
530
296
0
12 Oct 2020
UnQovering Stereotyping Biases via Underspecified Questions
Findings (Findings), 2020
Tao Li
Tushar Khot
Daniel Khashabi
Ashish Sabharwal
Vivek Srikumar
313
154
0
06 Oct 2020
BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig
Ali Madani
Lav Varshney
Caiming Xiong
R. Socher
Nazneen Rajani
411
336
0
26 Jun 2020
Large image datasets: A pyrrhic win for computer vision?
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020
Vinay Uday Prabhu
Abeba Birhane
334
406
0
24 Jun 2020
SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
F. Iandola
Albert Eaton Shaw
Ravi Krishna
Kurt Keutzer
VLM
259
136
0
19 Jun 2020
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
W. Guo
Aylin Caliskan
377
272
0
06 Jun 2020
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Su Lin Blodgett
Solon Barocas
Hal Daumé
Hanna M. Wallach
877
1,510
0
28 May 2020
Cyberbullying Detection with Fairness Constraints
O. Gencoglu
245
58
0
09 May 2020
On the Relationships Between the Grammatical Genders of Inanimate Nouns and Their Co-Occurring Adjectives and Verbs
Transactions of the Association for Computational Linguistics (TACL), 2020
Adina Williams
Robert Bamler
Lawrence Wolf-Sonkin
Damián E. Blasi
Hanna M. Wallach
180
20
0
03 May 2020
Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings
ACM Conference on Health, Inference, and Learning (CHIL), 2020
H. Zhang
Amy X. Lu
Mohamed Abdalla
Matthew B. A. McDermott
Marzyeh Ghassemi
265
196
0
11 Mar 2020
Measuring Social Biases in Grounded Vision and Language Embeddings
North American Chapter of the Association for Computational Linguistics (NAACL), 2020
Candace Ross
Boris Katz
Andrei Barbu
309
69
0
20 Feb 2020
Taking a Stance on Fake News: Towards Automatic Disinformation Assessment via Deep Bidirectional Transformer Language Models for Stance Detection
Chris Dulhanty
Jason L. Deglint
Ibrahim Ben Daya
A. Wong
149
23
0
27 Nov 2019
Page 3 of 3