ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

21 July 2016
Tolga Bolukbasi
Kai-Wei Chang
James Zou
Venkatesh Saligrama
Adam Kalai

Papers citing "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings"

50 / 778 papers shown
Investigating Bias in Multilingual Language Models: Cross-Lingual Transfer of Debiasing Techniques
Manon Reusens
Philipp Borchert
Margot Mieskes
Jochen De Weerdt
Bart Baesens
16 Oct 2023
Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts
Christina Chance
Da Yin
Dakuo Wang
Kai-Wei Chang
16 Oct 2023
"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters
Yixin Wan
George Pu
Jiao Sun
Aparna Garimella
Kai-Wei Chang
Nanyun Peng
13 Oct 2023
Large language models can accurately predict searcher preferences
Paul Thomas
S. Spielman
Nick Craswell
Bhaskar Mitra
19 Sep 2023
Zero-Shot Robustification of Zero-Shot Models
Dyah Adila
Changho Shin
Lin Cai
Frederic Sala
08 Sep 2023
Mind vs. Mouth: On Measuring Re-judge Inconsistency of Social Bias in Large Language Models
Yachao Zhao
Bo Wang
Dongming Zhao
Kun Huang
Yan Wang
Ruifang He
Yuexian Hou
24 Aug 2023
Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles
Pranav Narayanan Venkit
Sanjana Gautam
Ruchi Panchanadikar
Tingting Huang
Shomir Wilson
08 Aug 2023
Balanced Face Dataset: Guiding StyleGAN to Generate Labeled Synthetic Face Image Dataset for Underrepresented Group
Kidist Amde Mekonnen
07 Aug 2023
A Geometric Notion of Causal Probing
Clément Guerner
Anej Svete
Tianyu Liu
Alex Warstadt
Ryan Cotterell
27 Jul 2023
Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models
Somayeh Ghanbarzadeh
Yan-ping Huang
Hamid Palangi
R. C. Moreno
Hamed Khanpour
20 Jul 2023
Building Socio-culturally Inclusive Stereotype Resources with Community Engagement
Sunipa Dev
J. Goyal
Dinesh Tewari
Shachi Dave
Vinodkumar Prabhakaran
20 Jul 2023
National Origin Discrimination in Deep-learning-powered Automated Resume Screening
Changhao Nai
Kuangzheng Li
Haibing Lu
13 Jul 2023
Learning to Generate Equitable Text in Dialogue from Biased Training Data
Anthony Sicilia
Malihe Alikhani
10 Jul 2023
FFPDG: Fast, Fair and Private Data Generation
Weijie Xu
Jinjin Zhao
Francis Iannacci
Bo Wang
30 Jun 2023
Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
Neil Menghani
E. McFowland
Daniel B. Neill
19 Jun 2023
LEACE: Perfect linear concept erasure in closed form
Nora Belrose
David Schneider-Joseph
Shauli Ravfogel
Ryan Cotterell
Edward Raff
Stella Biderman
06 Jun 2023
ReFACT: Updating Text-to-Image Models by Editing the Text Encoder
Dana Arad
Hadas Orgad
Yonatan Belinkov
01 Jun 2023
Backpack Language Models
John Hewitt
John Thickstun
Christopher D. Manning
Percy Liang
26 May 2023
Rethinking Diversity in Deep Neural Network Testing
Zi Wang
Jihye Choi
Keming Wang
S. Jha
25 May 2023
Is Your Model "MADD"? A Novel Metric to Evaluate Algorithmic Fairness for Predictive Student Models
M. Verger
Sébastien Lallé
F. Bouchet
Vanda Luengo
24 May 2023
Trade-Offs Between Fairness and Privacy in Language Modeling
Cleo Matzken
Steffen Eger
Ivan Habernal
24 May 2023
SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations
Victoria Lin
Louis-Philippe Morency
24 May 2023
Debiasing should be Good and Bad: Measuring the Consistency of Debiasing Techniques in Language Models
Robert D Morabito
Jad Kabbara
Ali Emami
23 May 2023
Out-of-Distribution Generalization in Text Classification: Past, Present, and Future
Linyi Yang
Yangqiu Song
Xuan Ren
Chenyang Lyu
Yidong Wang
Lingqiao Liu
Jindong Wang
Jennifer Foster
Yue Zhang
23 May 2023
Counterfactual Augmentation for Multimodal Learning Under Presentation Bias
Victoria Lin
Louis-Philippe Morency
Dimitrios Dimitriadis
Srinagesh Sharma
23 May 2023
Assessing Linguistic Generalisation in Language Models: A Dataset for Brazilian Portuguese
Rodrigo Wilkens
Leonardo Zilio
Aline Villavicencio
23 May 2023
This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
Seraphina Goldfarb-Tarrant
Eddie L. Ungless
Esma Balkir
Su Lin Blodgett
22 May 2023
Comparing Biases and the Impact of Multilingual Training across Multiple Languages
Sharon Levy
Neha Ann John
Ling Liu
Yogarshi Vyas
Jie Ma
Yoshinari Fujinuma
Miguel Ballesteros
Vittorio Castelli
Dan Roth
18 May 2023
Smiling Women Pitching Down: Auditing Representational and Presentational Gender Biases in Image Generative AI
Luhang Sun
Mian Wei
Yibing Sun
Yoo Ji Suh
Liwei Shen
Sijia Yang
17 May 2023
ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages
Sourojit Ghosh
Aylin Caliskan
17 May 2023
Shielded Representations: Protecting Sensitive Attributes Through Iterative Gradient-Based Projection
Shadi Iskander
Kira Radinsky
Yonatan Belinkov
17 May 2023
"I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation
Anaelia Ovalle
Palash Goyal
Jwala Dhamala
Zachary Jaggers
Kai-Wei Chang
Aram Galstyan
R. Zemel
Rahul Gupta
17 May 2023
Constructing Holistic Measures for Social Biases in Masked Language Models
Yang Liu
Yuexian Hou
12 May 2023
Surfacing Biases in Large Language Models using Contrastive Input Decoding
G. Yona
Or Honovich
Itay Laish
Roee Aharoni
12 May 2023
Semantic Space Grounded Weighted Decoding for Multi-Attribute Controllable Dialogue Generation
Zhiling Zhang
Mengyue Wu
Ke Zhu
04 May 2023
Fairness in AI Systems: Mitigating gender bias from language-vision models
Lavisha Aggarwal
Shruti Bhargava
03 May 2023
Patterns of gender-specializing query reformulation
Amifa Raj
Bhaskar Mitra
Nick Craswell
Michael D. Ekstrand
25 Apr 2023
Individual Fairness in Bayesian Neural Networks
Alice Doherty
Matthew Wicker
Luca Laurenti
A. Patané
21 Apr 2023
Measuring Normative and Descriptive Biases in Language Models Using Census Data
Samia Touileb
Lilja Ovrelid
Erik Velldal
12 Apr 2023
Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
Emilio Ferrara
07 Apr 2023
Philosophical Foundations of GeoAI: Exploring Sustainability, Diversity, and Bias in GeoAI and Spatial Data Science
K. Janowicz
27 Mar 2023
Fairness: from the ethical principle to the practice of Machine Learning development as an ongoing agreement with stakeholders
Georgina Curto
F. Comim
22 Mar 2023
Neuro-symbolic Commonsense Social Reasoning
David Chanin
Anthony Hunter
14 Mar 2023
Contributing to Accessibility Datasets: Reflections on Sharing Study Data by Blind People
Rie Kamikubo
Kyungjun Lee
Hernisa Kacorri
09 Mar 2023
Bias, diversity, and challenges to fairness in classification and automated text analysis. From libraries to AI and back
Bettina Berendt
Özgür Karadeniz
Sercan Kiyak
Stefan Mertens
L. d’Haenens
07 Mar 2023
A Challenging Benchmark for Low-Resource Learning
Yudong Wang
Chang Ma
Qingxiu Dong
Lingpeng Kong
Jingjing Xu
07 Mar 2023
Fairness in Language Models Beyond English: Gaps and Challenges
Krithika Ramesh
Sunayana Sitaram
Monojit Choudhury
24 Feb 2023
Poisoning Web-Scale Training Datasets is Practical
Nicholas Carlini
Matthew Jagielski
Christopher A. Choquette-Choo
Daniel Paleka
Will Pearce
Hyrum S. Anderson
Andreas Terzis
Kurt Thomas
Florian Tramèr
20 Feb 2023
Evaluating Trade-offs in Computer Vision Between Attribute Privacy, Fairness and Utility
William Paul
P. Mathew
F. Alajaji
Philippe Burlina
15 Feb 2023
Conversational AI-Powered Design: ChatGPT as Designer, User, and Product
A. Kocaballi
15 Feb 2023