Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai
arXiv:1607.06520 · 21 July 2016 · CVBM, FaML

Papers citing "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings"

50 / 778 papers shown
Don't Take the Premise for Granted: Mitigating Artifacts in Natural Language Inference
Yonatan Belinkov, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, Alexander M. Rush
99 · 95 · 0 · 09 Jul 2019

Toward Fairness in AI for People with Disabilities: A Research Roadmap
Anhong Guo, Ece Kamar, Jennifer Wortman Vaughan, Hanna M. Wallach, Meredith Ringel Morris
99 · 118 · 0 · 04 Jul 2019

Training individually fair ML models with Sensitive Subspace Robustness
Mikhail Yurochkin, Amanda Bower, Yuekai Sun
FaML, OOD · 88 · 120 · 0 · 28 Jun 2019

Statistical Learning from Biased Training Samples
Stephan Clémençon, Pierre Laforgue
110 · 9 · 0 · 28 Jun 2019

Rényi Fair Inference
Sina Baharlouei, Maher Nouiehed, Ahmad Beirami, Meisam Razaviyayn
FaML · 66 · 67 · 0 · 28 Jun 2019

On the Coherence of Fake News Articles
Iknoor Singh, P Deepak, Anoop Kadan
GNN · 20 · 10 · 0 · 26 Jun 2019

Age and gender bias in pedestrian detection algorithms
Martim Brandao
75 · 46 · 0 · 25 Jun 2019

Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in Sentiment Analysis
J. Bhaskaran, Isha Bhallamudi
66 · 47 · 0 · 24 Jun 2019

Artificial Intelligence: the global landscape of ethics guidelines
Anna Jobin, M. Ienca, E. Vayena
121 · 1,684 · 0 · 24 Jun 2019

Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation
Daniel Loureiro, A. Jorge
85 · 138 · 0 · 24 Jun 2019

Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai Elsherief, Jieyu Zhao, Diba Mirza, E. Belding-Royer, Kai-Wei Chang, William Yang Wang
AI4CE · 141 · 563 · 0 · 21 Jun 2019

Considerations for the Interpretation of Bias Measures of Word Embeddings
I. Mirzaev, Anthony Schulte, Michael D. Conover, Sam Shah
55 · 3 · 0 · 19 Jun 2019

Incorporating Priors with Feature Attribution on Text Classification
Frederick Liu, Besim Avci
FAtt, FaML · 111 · 120 · 0 · 19 Jun 2019

Measuring Bias in Contextualized Word Representations
Keita Kurita, Nidhi Vyas, Ayush Pareek, A. Black, Yulia Tsvetkov
121 · 454 · 0 · 18 Jun 2019

Principled Frameworks for Evaluating Ethics in NLP Systems
Shrimai Prabhumoye, Elijah Mayfield, A. Black
60 · 7 · 0 · 14 Jun 2019

Conceptor Debiasing of Word Representations Evaluated on WEAT
S. Karve, Lyle Ungar, João Sedoc
FaML · 65 · 34 · 0 · 14 Jun 2019

Understanding artificial intelligence ethics and safety
David Leslie
FaML, AI4TS · 74 · 363 · 0 · 11 Jun 2019

Unsupervised Discovery of Gendered Language through Latent-Variable Modeling
Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna M. Wallach, Isabelle Augenstein, Ryan Cotterell
74 · 52 · 0 · 11 Jun 2019

Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Ran Zmigrod, Sabrina J. Mielke, Hanna M. Wallach, Ryan Cotterell
119 · 283 · 0 · 11 Jun 2019

Maximum Weighted Loss Discrepancy
Fereshte Khani, Aditi Raghunathan, Percy Liang
61 · 16 · 0 · 08 Jun 2019

Fair Division Without Disparate Impact
A. Peysakhovich, Christian Kroer
64 · 10 · 0 · 06 Jun 2019

Variational Pretraining for Semi-supervised Text Classification
Suchin Gururangan, T. Dang, Dallas Card, Noah A. Smith
VLM · 61 · 112 · 0 · 05 Jun 2019

Entity-Centric Contextual Affective Analysis
Anjalie Field, Yulia Tsvetkov
91 · 30 · 0 · 05 Jun 2019

Tracing Antisemitic Language Through Diachronic Embedding Projections: France 1789-1914
Rocco Tripodi, M. Warglien, S. Sullam, Deborah Paci
LLMSV · 42 · 21 · 0 · 04 Jun 2019

Gender-preserving Debiasing for Pre-trained Word Embeddings
Masahiro Kaneko, Danushka Bollegala
FaML · 72 · 131 · 0 · 03 Jun 2019

Evaluating Gender Bias in Machine Translation
Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer
95 · 406 · 0 · 03 Jun 2019

Can We Derive Explicit and Implicit Bias from Corpus?
Bo Wang, Baixiang Xue, A. Greenwald
31 · 2 · 0 · 31 May 2019

Reducing Gender Bias in Word-Level Language Models with a Gender-Equalizing Loss Function
Yusu Qian, Urwa Muaz, Ben Zhang, J. Hyun
FaML · 95 · 96 · 0 · 30 May 2019

Fairness and Missing Values
Fernando Martínez-Plumed, Cesar Ferri, David Nieves, José Hernández-Orallo
91 · 28 · 0 · 29 May 2019

Racial Bias in Hate Speech and Abusive Language Detection Datasets
Thomas Davidson, Debasmita Bhattacharya, Ingmar Weber
138 · 459 · 0 · 29 May 2019

Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor
Malvina Nissim, Rik van Noord, Rob van der Goot
FaML · 90 · 103 · 0 · 23 May 2019

Integrating Artificial Intelligence into Weapon Systems
Philip G. Feldman, Aaron Dant, Aaron K. Massey
29 · 12 · 0 · 10 May 2019

Proportionally Fair Clustering
Xingyu Chen, Brandon Fain, Charles Lyu, Kamesh Munagala
FedML, FaML · 128 · 144 · 0 · 09 May 2019

Auditing ImageNet: Towards a Model-driven Framework for Annotating Demographic Attributes of Large-Scale Image Datasets
Chris Dulhanty, A. Wong
74 · 42 · 0 · 03 May 2019

Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search
S. Geyik, Stuart Ambler, K. Kenthapadi
115 · 384 · 0 · 30 Apr 2019

The role of artificial intelligence in achieving the Sustainable Development Goals
Ricardo Vinuesa, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, S. Domisch, Anna Felländer, S. Langhans, Max Tegmark, F. F. Nerini
77 · 1,524 · 0 · 30 Apr 2019

Are We Consistently Biased? Multidimensional Analysis of Biases in Distributional Word Vectors
Anne Lauscher, Goran Glavaš
98 · 55 · 0 · 26 Apr 2019

Detecting inter-sectional accuracy differences in driver drowsiness detection algorithms
Mkhuseli Ngxande, J. Tapamo, Michael G. Burke
51 · 12 · 0 · 23 Apr 2019

Tracking and Improving Information in the Service of Fairness
Sumegha Garg, Michael P. Kim, Omer Reingold
FaML · 53 · 13 · 0 · 22 Apr 2019

Evaluating the Underlying Gender Bias in Contextualized Word Embeddings
Christine Basta, Marta R. Costa-jussá, Noe Casas
77 · 195 · 0 · 18 Apr 2019

Analytical Methods for Interpretable Ultradense Word Embeddings
Philipp Dufter, Hinrich Schütze
70 · 25 · 0 · 18 Apr 2019

REPAIR: Removing Representation Bias by Dataset Resampling
Yi Li, Nuno Vasconcelos
FaML · 81 · 287 · 0 · 16 Apr 2019

What's in a Name? Reducing Bias in Bios without Access to Protected Attributes
Alexey Romanov, Maria De-Arteaga, Hanna M. Wallach, J. Chayes, C. Borgs, Alexandra Chouldechova, S. Geyik, K. Kenthapadi, Anna Rumshisky, Adam Tauman Kalai
80 · 81 · 0 · 10 Apr 2019

NLPR@SRPOL at SemEval-2019 Task 6 and Task 5: Linguistically enhanced deep learning offensive sentence classifier
Alessandro Seganti, Helena Sobol, Iryna Orlova, Hannam Kim, J. Staniszewski, Tymoteusz Krumholc, Krystian Koziel
64 · 20 · 0 · 10 Apr 2019

Gender Bias in Contextualized Word Embeddings
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cotterell, Vicente Ordonez, Kai-Wei Chang
FaML · 127 · 421 · 0 · 05 Apr 2019

Identifying and Reducing Gender Bias in Word-Level Language Models
Shikha Bordia, Samuel R. Bowman
FaML · 136 · 329 · 0 · 05 Apr 2019

Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings
Thomas Manzini, Y. Lim, Yulia Tsvetkov, A. Black
FaML · 111 · 308 · 0 · 03 Apr 2019

Temporal and Aspectual Entailment
Thomas Kober, Sander Bijl de Vroe, Mark Steedman
61 · 16 · 0 · 02 Apr 2019

Deep Learning for Face Recognition: Pride or Prejudiced?
Shruti Nagpal, Maneet Singh, Richa Singh, Mayank Vatsa
FaML · 94 · 75 · 0 · 02 Apr 2019

On Measuring Social Biases in Sentence Encoders
Chandler May, Alex Jinpeng Wang, Shikha Bordia, Samuel R. Bowman, Rachel Rudinger
131 · 607 · 0 · 25 Mar 2019