ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Gender Bias in Contextualized Word Embeddings
arXiv: 1904.03310
5 April 2019
Jieyu Zhao
Tianlu Wang
Mark Yatskar
Ryan Cotterell
Vicente Ordonez
Kai-Wei Chang
    FaML

Papers citing "Gender Bias in Contextualized Word Embeddings"

20 / 70 papers shown
Dictionary-based Debiasing of Pre-trained Word Embeddings
Masahiro Kaneko, Danushka Bollegala
FaML · 23 Jan 2021

Debiasing Pre-trained Contextualised Embeddings
Masahiro Kaneko, Danushka Bollegala
23 Jan 2021

Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks
Alex Mathai, Shreya Khare, Srikanth G. Tamilselvam, Senthil Mani
AAML · 08 Nov 2020

Two Simple Ways to Learn Individual Fairness Metrics from Data
Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
FaML · 19 Jun 2020

Demoting Racial Bias in Hate Speech Detection
Mengzhou Xia, Anjalie Field, Yulia Tsvetkov
25 May 2020

Towards Socially Responsible AI: Cognitive Bias-Aware Multi-Objective Learning
Procheta Sen, Debasis Ganguly
14 May 2020

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang, Xi Victoria Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong
CVBM · 03 May 2020

Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research
Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Rada Mihalcea
01 May 2020

Algorithmic Fairness
Dana Pessach, E. Shmueli
FaML · 21 Jan 2020

RobBERT: a Dutch RoBERTa-based Language Model
Pieter Delobelle, Thomas Winters, Bettina Berendt
17 Jan 2020

Generating Interactive Worlds with Text
Angela Fan, Jack Urbanek, Pratik Ringshia, Emily Dinan, Emma Qian, ..., Shrimai Prabhumoye, Douwe Kiela, Tim Rocktäschel, Arthur Szlam, Jason Weston
20 Nov 2019

Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
10 Nov 2019

Coreference Resolution as Query-based Span Prediction
Wei Yu Wu, Fei Wang, Arianna Yuan, Fei Wu, Jiwei Li
LRM · 05 Nov 2019

Assessing Social and Intersectional Biases in Contextualized Word Representations
Y. Tan, Elisa Celis
FaML · 04 Nov 2019

Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition
Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan
24 Oct 2019

A Neural Entity Coreference Resolution Review
Nikolaos Stylianou, I. Vlahavas
21 Oct 2019

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML · 23 Aug 2019

Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis
J. Bhaskaran, Isha Bhallamudi
24 Jun 2019

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
18 Jun 2019

Conceptor Debiasing of Word Representations Evaluated on WEAT
S. Karve, Lyle Ungar, João Sedoc
FaML · 14 Jun 2019