ResearchTrend.AI
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

21 July 2016
Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai

Papers citing "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings"

Showing 50 of 770 citing papers:
  • Are Bias Evaluation Methods Biased? (Lina Berrayana, Sean Rooney, Luis Garces-Erice, Ioana Giurgiu; 20 Jun 2025)
  • Measuring (a Sufficient) World Model in LLMs: A Variance Decomposition Framework (Nadav Kunievsky, James A. Evans; 19 Jun 2025)
  • Dense SAE Latents Are Features, Not Bugs (Xiaoqing Sun, Alessandro Stolfo, Joshua Engels, Ben Wu, Senthooran Rajamanoharan, Mrinmaya Sachan, Max Tegmark; 18 Jun 2025)
  • Gender Inclusivity Fairness Index (GIFI): A Multilevel Framework for Evaluating Gender Diversity in Large Language Models (Zhengyang Shan, Emily Ruth Diana, Jiawei Zhou; 18 Jun 2025)
  • Probabilistic Aggregation and Targeted Embedding Optimization for Collective Moral Reasoning in Large Language Models (Chenchen Yuan, Zheyu Zhang, Shuo Yang, Bardh Prenkaj, Gjergji Kasneci; 17 Jun 2025)
  • Robustly Improving LLM Fairness in Realistic Settings via Interpretability (Adam Karvonen, Samuel Marks; 12 Jun 2025)
  • Preserving Task-Relevant Information Under Linear Concept Removal (Floris Holstege, Shauli Ravfogel, Bram Wouters; 12 Jun 2025)
  • Gender Bias in English-to-Greek Machine Translation (Eleni Gkovedarou, Joke Daems, Luna De Bruyne; 11 Jun 2025)
  • Evaluating LLM-corrupted Crowdsourcing Data Without Ground Truth (Yichi Zhang, Jinlong Pang, Zhaowei Zhu, Yang Liu; 08 Jun 2025)
  • Dissecting Bias in LLMs: A Mechanistic Interpretability Perspective (Bhavik Chandna, Zubair Bashir, Procheta Sen; 05 Jun 2025)
  • BiMa: Towards Biases Mitigation for Text-Video Retrieval via Scene Element Guidance (Huy Le, Nhat Chung, Tung Kieu, A. Nguyen, Ngan Le; 04 Jun 2025)
  • Beyond Linear Steering: Unified Multi-Attribute Control for Language Models (Narmeen Oozeer, Luke Marks, Fazl Barez, Amirali Abdullah; 30 May 2025)
  • Precise In-Parameter Concept Erasure in Large Language Models (Yoav Gur-Arieh, Clara Suslik, Yihuai Hong, Fazl Barez, Mor Geva; 28 May 2025)
  • DECASTE: Unveiling Caste Stereotypes in Large Language Models through Multi-Dimensional Bias Analysis (Prashanth Vijayaraghavan, Soroush Vosoughi, Lamogha Chizor, Raya Horesh, Rogerio Abreu de Paula, Ehsan Degan, Vandana Mukherjee; 20 May 2025)
  • Inter(sectional) Alia(s): Ambiguity in Voice Agent Identity via Intersectional Japanese Self-Referents (Takao Fujii, Katie Seaborn, Madeleine Steeds, Jun Kato; 20 May 2025)
  • Wisdom from Diversity: Bias Mitigation Through Hybrid Human-LLM Crowds (Axel Abels, Tom Lenaerts; 18 May 2025)
  • Do Large Language Models know who did what to whom? (Joseph M. Denning, Xiaohan, Bryor Snefjella, Idan A. Blank; 23 Apr 2025)
  • FairSteer: Inference Time Debiasing for LLMs with Dynamic Activation Steering (Yongbin Li, Zhiting Fan, Ruizhe Chen, Xiaotang Gai, Luqi Gong, Yan Zhang, Zuozhu Liu; 20 Apr 2025)
  • Flexibility of German gas-fired generation: evidence from clustering empirical operation (Chiara Fusar Bassini, Alice Lixuan Xu, Jorge Sanchez Canales, Lion Hirth, Lynn H. Kaack; 14 Apr 2025)
  • GraphSeg: Segmented 3D Representations via Graph Edge Addition and Contraction (Haozhan Tang, Tianyi Zhang, Oliver Kroemer, Matthew Johnson-Roberson, Weiming Zhi; 04 Apr 2025)
  • Overcoming Sparsity Artifacts in Crosscoders to Interpret Chat-Tuning (Julian Minder, Clement Dumas, Caden Juang, Bilal Chugtai, Neel Nanda; 03 Apr 2025)
  • Evaluating how LLM annotations represent diverse views on contentious topics (Megan A. Brown, Shubham Atreja, Libby Hemphill, Patrick Y. Wu; 29 Mar 2025)
  • Calibrating Verbal Uncertainty as a Linear Feature to Reduce Hallucinations (Ziwei Ji, L. Yu, Yeskendir Koishekenov, Yejin Bang, Anthony Hartshorn, Alan Schelten, Cheng Zhang, Pascale Fung, Nicola Cancedda; 18 Mar 2025)
  • Who Relies More on World Knowledge and Bias for Syntactic Ambiguity Resolution: Humans or LLMs? (So Young Lee, Russell Scheinberg, Amber Shore, Ameeta Agrawal; 13 Mar 2025)
  • Gender Encoding Patterns in Pretrained Language Model Representations (Mahdi Zakizadeh, Mohammad Taher Pilehvar; 09 Mar 2025)
  • Language Models Predict Empathy Gaps Between Social In-groups and Out-groups (Yu Hou, Hal Daumé III, Rachel Rudinger; 02 Mar 2025)
  • Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps (Lukasz Sztukiewicz, Ignacy Stepka, Michał Wiliński, Jerzy Stefanowski; 28 Feb 2025)
  • The Call for Socially Aware Language Technologies (Diyi Yang, Dirk Hovy, David Jurgens, Barbara Plank; 24 Feb 2025)
  • Encoding Inequity: Examining Demographic Bias in LLM-Driven Robot Caregiving (Raj Korpan; 24 Feb 2025)
  • Is Free Self-Alignment Possible? (Dyah Adila, Changho Shin, Yijing Zhang, Frederic Sala; 24 Feb 2025)
  • Intrinsic Model Weaknesses: How Priming Attacks Unveil Vulnerabilities in Large Language Models (Yuyi Huang, Runzhe Zhan, Derek F. Wong, Lidia S. Chao, Ailin Tao; 23 Feb 2025)
  • Local Differences, Global Lessons: Insights from Organisation Policies for International Legislation (Lucie-Aimée Kaffee, Pepa Atanasova, Anna Rogers; 19 Feb 2025)
  • Designing Role Vectors to Improve LLM Inference Behaviour (Daniele Potertì, Andrea Seveso, Fabio Mercorio; 17 Feb 2025)
  • Man Made Language Models? Evaluating LLMs' Perpetuation of Masculine Generics Bias (Enzo Doyen, Amalia Todirascu; 14 Feb 2025)
  • Fine-Tuned LLMs are "Time Capsules" for Tracking Societal Bias Through Books (Sangmitra Madhusudan, Robert D Morabito, Skye Reid, Nikta Gohari Sadr, Ali Emami; 07 Feb 2025)
  • The Energy Loss Phenomenon in RLHF: A New Perspective on Mitigating Reward Hacking (Yuchun Miao, Sen Zhang, Liang Ding, Yuqi Zhang, Lefei Zhang, Dacheng Tao; 31 Jan 2025)
  • Large language models can replicate cross-cultural differences in personality (Paweł Niszczota, Mateusz Janczak, Michał Misiak; 28 Jan 2025)
  • Musical ethnocentrism in Large Language Models (Anna Kruspe; 23 Jan 2025)
  • Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models (Abdulkadir Erol, Trilok Padhi, Agnik Saha, Ugur Kursuncu, Mehmet Emin Aktas; 17 Jan 2025)
  • The Goofus & Gallant Story Corpus for Practical Value Alignment (Md Sultan al Nahian, Tasmia Tasrin, Spencer Frazier, Mark O. Riedl, Brent Harrison; 17 Jan 2025)
  • Foundation Models at Work: Fine-Tuning for Fairness in Algorithmic Hiring (Buse Sibel Korkmaz, Rahul Nair, Elizabeth M. Daly, Evangelos Anagnostopoulos, Christos Varytimidis, Antonio del Rio Chanona; 13 Jan 2025)
  • Scaling Down Semantic Leakage: Investigating Associative Bias in Smaller Language Models (Veronika Smilga; 11 Jan 2025)
  • Gender-Neutral Large Language Models for Medical Applications: Reducing Bias in PubMed Abstracts (Elizabeth Schaefer, Kirk Roberts; 10 Jan 2025)
  • Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models (Martin Pawelczyk, Lillian Sun, Zhenting Qi, Aounon Kumar, Himabindu Lakkaraju; 03 Jan 2025)
  • Social Science Is Necessary for Operationalizing Socially Responsible Foundation Models (Adam Davies, Elisa Nguyen, Michael Simeone, Erik Johnston, Martin Gubri; 20 Dec 2024)
  • Cross-Lingual Transfer of Debiasing and Detoxification in Multilingual LLMs: An Extensive Investigation (Vera Neplenbroek, Arianna Bisazza, Raquel Fernández; 18 Dec 2024)
  • The Evolution and Future Perspectives of Artificial Intelligence Generated Content (Chengzhang Zhu, Luobin Cui, Ying Tang, Jiacun Wang; 02 Dec 2024)
  • Perception of Visual Content: Differences Between Humans and Foundation Models (Nardiena A. Pratama, Shaoyang Fan, Gianluca Demartini; 28 Nov 2024)
  • Learning to Ask: Conversational Product Search via Representation Learning (Jie Zou, Jimmy Xiangji Huang, Zhaochun Ren, Evangelos Kanoulas; 18 Nov 2024)
  • Controllable Context Sensitivity and the Knob Behind It (Julian Minder, Kevin Du, Niklas Stoehr, Giovanni Monea, Chris Wendler, Robert West, Ryan Cotterell; 11 Nov 2024)