Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

21 July 2016
Tolga Bolukbasi
Kai-Wei Chang
James Zou
Venkatesh Saligrama
Adam Kalai
CVBM FaML

Papers citing "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings"

50 / 779 papers shown
Differentially Private Representation for NLP: Formal Guarantee and An Empirical Study on Privacy and Fairness
Lingjuan Lyu
Xuanli He
Yitong Li
123
90
0
03 Oct 2020
Quantifying social organization and political polarization in online platforms
Isaac Waller
Ashton Anderson
94
140
0
01 Oct 2020
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia
Clara Vania
Rasika Bhalerao
Samuel R. Bowman
161
691
0
30 Sep 2020
Why resampling outperforms reweighting for correcting sampling bias with stochastic gradients
Jing An
Lexing Ying
Yuhua Zhu
114
40
0
28 Sep 2020
Mitigating Gender Bias for Neural Dialogue Generation with Adversarial Learning
Haochen Liu
Wentao Wang
Yiqi Wang
Hui Liu
Zitao Liu
Jiliang Tang
86
71
0
28 Sep 2020
Fair Meta-Learning For Few-Shot Classification
Chengli Zhao
Changbin Li
Jincheng Li
Feng Chen
FaML
65
26
0
23 Sep 2020
Probabilistic Machine Learning for Healthcare
Irene Y. Chen
Shalmali Joshi
Marzyeh Ghassemi
Rajesh Ranganath
OOD
74
56
0
23 Sep 2020
Exploring the Linear Subspace Hypothesis in Gender Bias Mitigation
Francisco Vargas
Ryan Cotterell
97
29
0
20 Sep 2020
Evaluating and Mitigating Bias in Image Classifiers: A Causal Perspective Using Counterfactuals
Saloni Dash
V. Balasubramanian
Amit Sharma
CML
76
70
0
17 Sep 2020
GeDi: Generative Discriminator Guided Sequence Generation
Ben Krause
Akhilesh Deepak Gotmare
Bryan McCann
N. Keskar
Shafiq Joty
R. Socher
Nazneen Rajani
183
409
0
14 Sep 2020
Alfie: An Interactive Robot with a Moral Compass
Cigdem Turan
P. Schramowski
Constantin Rothkopf
Kristian Kersting
LM&Ro
22
0
0
11 Sep 2020
Investigating Gender Bias in BERT
Rishabh Bhardwaj
Navonil Majumder
Soujanya Poria
85
108
0
10 Sep 2020
Learning Unbiased Representations via Rényi Minimization
Vincent Grari
Oualid El Hajouji
Sylvain Lamprier
Marcin Detyniecki
FaML
66
21
0
07 Sep 2020
Adversarial Learning for Counterfactual Fairness
Vincent Grari
Sylvain Lamprier
Marcin Detyniecki
FaML
60
23
0
30 Aug 2020
Ethical behavior in humans and machines -- Evaluating training data quality for beneficial machine learning
Thilo Hagendorff
30
28
0
26 Aug 2020
Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes
Nicholas Lourie
Ronan Le Bras
Yejin Choi
82
125
0
20 Aug 2020
The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
Ian Tenney
James Wexler
Jasmijn Bastings
Tolga Bolukbasi
Andy Coenen
...
Ellen Jiang
Mahima Pushkarna
Carey Radebaugh
Emily Reif
Ann Yuan
VLM
130
196
0
12 Aug 2020
Bias and Discrimination in AI: a cross-disciplinary perspective
Xavier Ferrer
Tom van Nuenen
Jose Such
Mark Coté
Natalia Criado
FaML
48
148
0
11 Aug 2020
Assessing Demographic Bias in Named Entity Recognition
Shubhanshu Mishra
Sijun He
Luca Belli
59
47
0
08 Aug 2020
Discovering and Categorising Language Biases in Reddit
Xavier Ferrer Aran
Tom van Nuenen
Jose Such
Natalia Criado
39
49
0
06 Aug 2020
Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware
N. Benjamin Erichson
D. Taylor
Qixuan Wu
Michael W. Mahoney
AAML
75
13
0
31 Jul 2020
Ethics of Artificial Intelligence in Surgery
Frank Rudzicz
Raeid Saqur
SyDa
10
13
0
28 Jul 2020
Defining and Evaluating Fair Natural Language Generation
C. Yeo
A. Chen
80
24
0
28 Jul 2020
Word Embeddings: Stability and Semantic Change
Lucas Rettenmeier
BDL
39
1
0
23 Jul 2020
Towards Debiasing Sentence Representations
Paul Pu Liang
Irene Li
Emily Zheng
Y. Lim
Ruslan Salakhutdinov
Louis-Philippe Morency
106
242
0
16 Jul 2020
Monitoring and explainability of models in production
Janis Klaise
A. V. Looveren
Clive Cox
G. Vacanti
Alexandru Coca
116
49
0
13 Jul 2020
Ensuring Fairness Beyond the Training Data
Debmalya Mandal
Samuel Deng
Suman Jana
Jeannette M. Wing
Daniel J. Hsu
FaML OOD
88
59
0
12 Jul 2020
Is Machine Learning Speaking my Language? A Critical Look at the NLP-Pipeline Across 8 Human Languages
Esma Wali
Yan Chen
Christopher Mahoney
Thomas Middleton
M. Babaeianjelodar
Mariama Njie
Jeanna Neefe Matthews
66
9
0
11 Jul 2020
Algorithmic Fairness in Education
René F. Kizilcec
Hansol Lee
FaML
120
126
0
10 Jul 2020
Cultural Cartography with Word Embeddings
Dustin S. Stoltz
Marshall A. Taylor
57
39
0
09 Jul 2020
Automatic Detection of Sexist Statements Commonly Used at the Workplace
Dylan Grosz
Patricia Conde Céspedes
27
37
0
08 Jul 2020
README: REpresentation learning by fairness-Aware Disentangling MEthod
Sungho Park
D. Kim
Sunhee Hwang
H. Byun
DRL CML
70
18
0
07 Jul 2020
Counterfactual Data Augmentation using Locally Factored Dynamics
Silviu Pitis
Elliot Creager
Animesh Garg
BDL OffRL
115
89
0
06 Jul 2020
Participation is not a Design Fix for Machine Learning
Mona Sloane
Emanuel Moss
O. Awomolo
Laura Forlano
HAI
117
221
0
05 Jul 2020
Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge
Pat Verga
Haitian Sun
Livio Baldini Soares
William W. Cohen
KELM
102
50
0
02 Jul 2020
OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings
Sunipa Dev
Tao Li
J. M. Phillips
Vivek Srikumar
121
55
0
30 Jun 2020
Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Julien Launay
Iacopo Poli
François Boniface
Florent Krzakala
127
64
0
23 Jun 2020
MDR Cluster-Debias: A Nonlinear Word Embedding Debiasing Pipeline
Yuhao Du
K. Joseph
34
3
0
20 Jun 2020
Two Simple Ways to Learn Individual Fairness Metrics from Data
Debarghya Mukherjee
Mikhail Yurochkin
Moulinath Banerjee
Yuekai Sun
FaML
92
97
0
19 Jun 2020
Fair clustering via equitable group representations
Mohsen Abbasi
Aditya Bhaskara
Suresh Venkatasubramanian
FaML FedML
96
87
0
19 Jun 2020
Mitigating Gender Bias in Captioning Systems
Ruixiang Tang
Mengnan Du
Yuening Li
Zirui Liu
Na Zou
Helen Zhou
FaML
126
66
0
15 Jun 2020
Fairness in Forecasting and Learning Linear Dynamical Systems
Quan-Gen Zhou
Jakub Mareček
Robert Shorten
AI4TS
96
7
0
12 Jun 2020
Group-Fair Online Allocation in Continuous Time
Semih Cayci
Swati Gupta
A. Eryilmaz
FaML
63
20
0
11 Jun 2020
Principles to Practices for Responsible AI: Closing the Gap
Daniel S. Schiff
B. Rakova
A. Ayesh
Anat Fanti
M. Lennon
89
89
0
08 Jun 2020
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
W. Guo
Aylin Caliskan
59
245
0
06 Jun 2020
Higher-Order Explanations of Graph Neural Networks via Relevant Walks
Thomas Schnake
Oliver Eberle
Jonas Lederer
Shinichi Nakajima
Kristof T. Schütt
Klaus-Robert Muller
G. Montavon
128
224
0
05 Jun 2020
Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings
Vaibhav Kumar
Tenzin Singhay Bhotia
Vaibhav Kumar
Tanmoy Chakraborty
CVBM
84
46
0
02 Jun 2020
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Su Lin Blodgett
Solon Barocas
Hal Daumé
Hanna M. Wallach
159
1,261
0
28 May 2020
MT-Adapted Datasheets for Datasets: Template and Repository
Marta R. Costa-jussà
Roger Creus
Oriol Domingo
A. Domínguez
Miquel Escobar
Cayetana López
Marina Garcia
Margarita Geleta
72
12
0
27 May 2020
Examining Racial Bias in an Online Abuse Corpus with Structural Topic Modeling
Thomas Davidson
Debasmita Bhattacharya
62
13
0
26 May 2020