ResearchTrend.AI
Papers / arXiv:1607.06520 / Cited By
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

21 July 2016
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai
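For context, the cited paper's "hard debiasing" removes a learned bias direction (e.g. the he–she axis) from the embeddings of gender-neutral words by linear projection. A minimal sketch of that neutralize step with toy vectors (not the paper's released code; the vectors here are illustrative assumptions):

```python
import numpy as np

def neutralize(w, g):
    """Remove the component of word vector w along bias direction g
    (the 'neutralize' step of hard debiasing), then re-normalize."""
    g = g / np.linalg.norm(g)            # unit bias direction
    w_debiased = w - np.dot(w, g) * g    # project out the bias component
    return w_debiased / np.linalg.norm(w_debiased)

# Toy example: after neutralizing, w_hat has no component along g.
g = np.array([1.0, 0.0, 0.0])            # assumed bias (he-she) direction
w = np.array([0.3, 0.4, 0.5])            # assumed embedding of a neutral word
w_hat = neutralize(w, g)
print(float(np.dot(w_hat, g)))           # ~0.0: bias component removed
```

In the paper the bias direction is estimated from definitional pairs (she/he, woman/man, …) via PCA, and a separate "equalize" step makes word pairs like grandmother/grandfather equidistant from neutral words.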

Papers citing "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings"

50 of 778 citing papers shown:
The Lifecycle of "Facts": A Survey of Social Bias in Knowledge Graphs
Angelie Kraft, Ricardo Usbeck
07 Oct 2022

Debiasing isn't enough! -- On the Effectiveness of Debiasing MLMs and their Social Biases in Downstream Tasks
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki
06 Oct 2022

Re-contextualizing Fairness in NLP: The Case of India
Shaily Bhatt, Sunipa Dev, Partha P. Talukdar, Shachi Dave, Vinodkumar Prabhakaran
25 Sep 2022

Bias at a Second Glance: A Deep Dive into Bias for German Educational Peer-Review Data Modeling
Thiemo Wambsganss, Vinitra Swamy, Roman Rietsche, Tanja Käser
21 Sep 2022

Closing the Gender Wage Gap: Adversarial Fairness in Job Recommendation
Clara Rus, Jeffrey Luppes, Harrie Oosterhuis, Gido Schoenmacker
20 Sep 2022

Mitigating Representation Bias in Action Recognition: Algorithms and Benchmarks
Haodong Duan, Yue Zhao, Kai-xiang Chen, Yu Xiong, Dahua Lin
20 Sep 2022
FairGBM: Gradient Boosting with Fairness Constraints
André F. Cruz, Catarina Belém, Sérgio Jesus, Joao Bravo, Pedro Saleiro, P. Bizarro
16 Sep 2022

Fair Inference for Discrete Latent Variable Models
Rashidul Islam, Shimei Pan, James R. Foulds
15 Sep 2022

"Es geht um Respekt, nicht um Technologie" ("It's About Respect, Not Technology": Insights from a Cross-Stakeholder Workshop on Gender-Fair Language and Language Technology)
Sabrina Burtscher, Katta Spiel, Lukas Daniel Klausner, Manuel Lardelli, Dagmar Gromann
06 Sep 2022

Debiasing Word Embeddings with Nonlinear Geometry
Lu Cheng, Nayoung Kim, Huan Liu
29 Aug 2022

Sustaining Fairness via Incremental Learning
Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
25 Aug 2022

TESTSGD: Interpretable Testing of Neural Networks Against Subtle Group Discrimination
Mengdi Zhang, Jun Sun, Jingyi Wang, Bing-Jie Sun
24 Aug 2022
Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies
Gati Aher, Rosa I. Arriaga, Adam Tauman Kalai
18 Aug 2022

Debiasing Gender Bias in Information Retrieval Models
Dhanasekar Sundararaman, Vivek Subramanian
02 Aug 2022

Gender bias in (non)-contextual clinical word embeddings for stereotypical medical categories
Gizem Sogancioglu, Fabian Mijsters, Amar van Uden, Jelle Peperzak
02 Aug 2022

Towards Fairness-Aware Multi-Objective Optimization
Guo-Ding Yu, Lianbo Ma, W. Du, WenLi Du, Yaochu Jin
22 Jul 2022

Measuring and signing fairness as performance under multiple stakeholder distributions
David Lopez-Paz, Diane Bouchacourt, Levent Sagun, Nicolas Usunier
20 Jul 2022

A Multibias-mitigated and Sentiment Knowledge Enriched Transformer for Debiasing in Multimodal Conversational Emotion Recognition
Jinglin Wang, Fang Ma, Yazhou Zhang, Dawei Song
17 Jul 2022
A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America
Laura Alonso Alemany, Luciana Benotti, Hernán Maina, Lucía González, Mariela Rajngewerc, ..., Guido Ivetta, Alexia Halvorsen, Amanda Rojo, M. Bordone, Beatriz Busaniche
14 Jul 2022

Diversity-aware social robots meet people: beyond context-aware embodied AI
Carmine Tommaso Recchiuto, A. Sgorbissa
12 Jul 2022

FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle, Bettina Berendt
10 Jul 2022

Probing Classifiers are Unreliable for Concept Removal and Detection
Abhinav Kumar, Chenhao Tan, Amit Sharma
08 Jul 2022

Understanding Instance-Level Impact of Fairness Constraints
Jialu Wang, Xinze Wang, Yang Liu
30 Jun 2022

SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice
Mohit Singhal, Chen Ling, Pujan Paudel, Poojitha Thota, Nihal Kumarswamy, Gianluca Stringhini, Shirin Nilizadeh
29 Jun 2022

Is your model predicting the past?
Moritz Hardt, Michael P. Kim
23 Jun 2022
Classification Utility, Fairness, and Compactness via Tunable Information Bottleneck and Rényi Measures
A. Gronowski, William Paul, F. Alajaji, Bahman Gharesifard, Philippe Burlina
20 Jun 2022

Characteristics of Harmful Text: Towards Rigorous Benchmarking of Language Models
Maribeth Rauh, John F. J. Mellor, J. Uesato, Po-Sen Huang, Johannes Welbl, ..., Amelia Glaese, G. Irving, Iason Gabriel, William S. Isaac, Lisa Anne Hendricks
16 Jun 2022

Respect as a Lens for the Design of AI Systems
W. Seymour, Max Van Kleek, Reuben Binns, Dave Murray-Rust
15 Jun 2022

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser
08 Jun 2022

How to Dissect a Muppet: The Structure of Transformer Embedding Spaces
Timothee Mickus, Denis Paperno, Mathieu Constant
07 Jun 2022

Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics
Aylin Caliskan, Pimparkar Parth Ajay, Tessa E. S. Charlesworth, Robert Wolfe, M. Banaji
07 Jun 2022

[Re] Badder Seeds: Reproducing the Evaluation of Lexical Methods for Bias Measurement
Jille van der Togt, Lea Tiyavorabun, Matteo Rosati, Giulio Starace
03 Jun 2022
Measuring Gender Bias in Word Embeddings of Gendered Languages Requires Disentangling Grammatical Gender Signals
Shiva Omrani Sabbaghi, Aylin Caliskan
03 Jun 2022

What Changed? Investigating Debiasing Methods using Causal Mediation Analysis
Su-Ha Jeoung, Jana Diesner
01 Jun 2022

Hollywood Identity Bias Dataset: A Context Oriented Bias Analysis of Movie Dialogues
Sandhya Singh, Prapti Roy, Nihar Ranjan Sahoo, Niteesh Mallela, Himanshu Gupta, ..., Milind Savagaonkar, Nidhi, Roshni Ramnani, Anutosh Maitra, Shubhashis Sengupta
31 May 2022

Attention Flows for General Transformers
Niklas Metzger, Christopher Hahn, Julian Siber, Frederik Schmitt, Bernd Finkbeiner
30 May 2022

StereoKG: Data-Driven Knowledge Graph Construction for Cultural Knowledge and Stereotypes
Awantee V. Deshpande, Dana Ruiter, Marius Mosbach, Dietrich Klakow
27 May 2022

Toward Understanding Bias Correlations for Mitigation in NLP
Lu Cheng, Suyu Ge, Huan Liu
24 May 2022
Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements
Conrad Borchers, Dalia Sara Gala, Ben Gilburt, Eduard Oravkin, Wilfried Bounsi, Yuki M. Asano, Hannah Rose Kirk
23 May 2022

How to keep text private? A systematic review of deep learning methods for privacy-preserving natural language processing
Samuel Sousa, Roman Kern
20 May 2022

Gender Bias in Meta-Embeddings
Masahiro Kaneko, Danushka Bollegala, Naoaki Okazaki
19 May 2022

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams
18 May 2022

Dialog Inpainting: Turning Documents into Dialogs
Zhuyun Dai, Arun Tejasvi Chaganty, Vincent Zhao, Aida Amini, Q. Rashid, Mike Green, Kelvin Guu
18 May 2022

Towards Debiasing Translation Artifacts
Koel Dutta Chowdhury, Rricha Jalota, C. España-Bonet, Josef van Genabith
16 May 2022

Assessing the Limits of the Distributional Hypothesis in Semantic Spaces: Trait-based Relational Knowledge and the Impact of Co-occurrences
Mark Anderson, Jose Camacho-Collados
16 May 2022
Heroes, Villains, and Victims, and GPT-3: Automated Extraction of Character Roles Without Training Data
Dominik Stammbach, Maria Antoniak, Elliott Ash
16 May 2022

Trucks Don't Mean Trump: Diagnosing Human Error in Image Analysis
J.D. Zamfirescu-Pereira, Jerry Chen, Emily Wen, Allison Koenecke, N. Garg, Emma Pierson
15 May 2022

Naturalistic Causal Probing for Morpho-Syntax
Afra Amini, Tiago Pimentel, Clara Meister, Ryan Cotterell
14 May 2022

Exploring How Machine Learning Practitioners (Try To) Use Fairness Toolkits
Wesley Hanwen Deng, Manish Nagireddy, M. S. Lee, Jatinder Singh, Zhiwei Steven Wu, Kenneth Holstein, Haiyi Zhu
13 May 2022

Mitigating Gender Stereotypes in Hindi and Marathi
Neeraja Kirtane, Tanvi Anand
12 May 2022