ResearchTrend.AI

On Measuring and Mitigating Biased Inferences of Word Embeddings
AAAI Conference on Artificial Intelligence (AAAI), 2019
arXiv:1908.09369
Sunipa Dev, Tao Li, J. M. Phillips, Vivek Srikumar
25 August 2019

Papers citing "On Measuring and Mitigating Biased Inferences of Word Embeddings"

50 of 108 citing papers shown.
Exploring and Mitigating Gender Bias in Encoder-Based Transformer Models
Ariyan Hossain, Khondokar Mohammad Ahanaf Hannan, Rakinul Haque, Nowreen Tarannum Rafa, Humayra Musarrat, Shoaib Ahmed Dipu, Farig Yousuf Sadeque
01 Nov 2025

Once Is Enough: Lightweight DiT-Based Video Virtual Try-On via One-Time Garment Appearance Injection
Yanjie Pan, Qingdong He, Lidong Wang, Bo Peng, Mingmin Chi
09 Oct 2025

PolBiX: Detecting LLMs' Political Bias in Fact-Checking through X-phemisms
Charlott Jakob, David Harbecke, Patrick Parschan, Pia Wenzel Neves, Vera Schmitt
18 Sep 2025

Investigating Intersectional Bias in Large Language Models using Confidence Disparities in Coreference Resolution
Falaah Arif Khan, N. Sivakumar, Yinong Oliver Wang, Katherine Metcalf, Cezanne Camacho, B. Theobald, Luca Zappella, N. Apostoloff
09 Aug 2025

I Think, Therefore I Am Under-Qualified? A Benchmark for Evaluating Linguistic Shibboleth Detection in LLM Hiring Evaluations
Julia Kharchenko, Tanya Roosta, Aman Chadha, Chirag Shah
06 Aug 2025
Ming-Omni: A Unified Multimodal Model for Perception and Generation
Inclusion AI, Biao Gong, Cheng Zou, C. Zheng, Chunluan Zhou, ..., Zipeng Feng, Zhijiang Fang, Zhihao Qiu, Ziyuan Huang, Z. He
11 Jun 2025

Benchmarking and Pushing the Multi-Bias Elimination Boundary of LLMs via Causal Effect Estimation-guided Debiasing
Zhouhao Sun, Zhiyuan Kan, Xiao Ding, Li Du, Yang Zhao, Bing Qin, Ting Liu
22 May 2025

GenderBench: Evaluation Suite for Gender Biases in LLMs
Matúš Pikuliak
17 May 2025

Assumed Identities: Quantifying Gender Bias in Machine Translation of Gender-Ambiguous Occupational Terms
Orfeas Menis Mastromichalakis, Giorgos Filandrianos, Maria Symeonaki, Giorgos Stamou
06 Mar 2025

Analyzing the Safety of Japanese Large Language Models in Stereotype-Triggering Prompts
Akito Nakanishi, Yukie Sano, Geng Liu, Francesco Pierri
03 Mar 2025
Language Models Predict Empathy Gaps Between Social In-groups and Out-groups
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Yu Hou, Hal Daumé III, Rachel Rudinger
02 Mar 2025

Structured Reasoning for Fairness: A Multi-Agent Approach to Bias Detection in Textual Data
Tianyi Huang, Elsa Fan
01 Mar 2025

Do LLMs exhibit demographic parity in responses to queries about Human Rights?
Rafiya Javed, Jackie Kay, David Yanni, Abdullah Zaini, Anushe Sheikh, Maribeth Rauh, Iason Gabriel, Laura Weidinger
26 Feb 2025

Bias in Large Language Models: Origin, Evaluation, and Mitigation
Yufei Guo, Muzhe Guo, Juntao Su, Zhou Yang, Mengqiu Zhu, Hongfei Li, Mengyang Qiu, Shuo Shuo Liu
16 Nov 2024
Large Language Models Still Exhibit Bias in Long Text
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Wonje Jeung, Dongjae Jeon, Ashkan Yousefpour, Jonghyun Choi
23 Oct 2024

LLMs are Biased Teachers: Evaluating LLM Bias in Personalized Education
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Iain Xie Weissburg, Sathvika Anand, Sharon Levy, Haewon Jeong
17 Oct 2024

BenchmarkCards: Standardized Documentation for Large Language Model Benchmarks
Anna Sokol, Elizabeth M. Daly, Michael Hind, David Piorkowski, Xiangliang Zhang, Nuno Moniz, Nitesh Chawla
16 Oct 2024

Collapsed Language Models Promote Fairness
International Conference on Learning Representations (ICLR), 2024
Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei
06 Oct 2024

Fairness Definitions in Language Models Explained
Thang Viet Doan, Zhibo Chu, Sribala Vidyadhari Chinta, Wenbin Zhang
26 Jul 2024
Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach
Sourojit Ghosh, Pranav Narayanan Venkit, Sanjana Gautam, Shomir Wilson, Aylin Caliskan
20 Jul 2024

Exploring Changes in Nation Perception with Nationality-Assigned Personas in LLMs
M. Kamruzzaman, Gene Louis Kim
20 Jun 2024

Evaluating Short-Term Temporal Fluctuations of Social Biases in Social Media Data and Masked Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Yi Zhou, Danushka Bollegala, Jose Camacho-Collados
19 Jun 2024

GECOBench: A Gender-Controlled Text Dataset and Benchmark for Quantifying Biases in Explanations
Rick Wilming, Artur Dox, Hjalmar Schulz, Marta Oliveira, Benedict Clark, Stefan Haufe
17 Jun 2024

Exploring Safety-Utility Trade-Offs in Personalized Language Models
Anvesh Rao Vijjini, Somnath Basu Roy Chowdhury, Snigdha Chaturvedi
17 Jun 2024
Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas
Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, ..., Kuofeng Gao, Sihong He, Jun Zhuang, Lu Cheng, Haohan Wang
08 Jun 2024

Ask LLMs Directly, "What shapes your bias?": Measuring Social Bias in Large Language Models
Jisu Shin, Hoyun Song, Huije Lee, Soyeong Jeong, Jong C. Park
06 Jun 2024

Large Language Model Bias Mitigation from the Perspective of Knowledge Editing
Ruizhe Chen, Yichen Li, Zikai Xiao, Zuo-Qiang Liu
15 May 2024

Believing Anthropomorphism: Examining the Role of Anthropomorphic Cues on Trust in Large Language Models
Michelle Cohn, Mahima Pushkarna, Gbolahan O. Olanubi, Joseph M. Moran, Daniel Padgett, Zion Mengesha, Courtney Heldreth
09 May 2024
Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
International Conference on Computational Linguistics (COLING), 2024
Damin Zhang, Yi Zhang, Geetanjali Bihani, Julia Taylor Rayz
06 May 2024

GeniL: A Multilingual Dataset on Generalizing Language
Aida Mostafazadeh Davani, S. Gubbi, Sunipa Dev, Shachi Dave, Vinodkumar Prabhakaran
08 Apr 2024

Fairness in Large Language Models: A Taxonomic Survey
Zhibo Chu, Sribala Vidyadhari Chinta, Wenbin Zhang
31 Mar 2024

Projective Methods for Mitigating Gender Bias in Pre-trained Language Models
Hillary Dawkins, I. Nejadgholi, Daniel Gillis, J. McCuaig
27 Mar 2024

Evaluating Unsupervised Dimensionality Reduction Methods for Pretrained Sentence Embeddings
Gaifan Zhang, Yi Zhou, Danushka Bollegala
20 Mar 2024

Detecting Bias in Large Language Models: Fine-tuned KcBERT
J. K. Lee, T. M. Chung
16 Mar 2024
Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction
International Conference on Language Resources and Evaluation (LREC), 2024
Ziyang Xu, Keqin Peng, Liang Ding, Dacheng Tao, Xiliang Lu
15 Mar 2024

SeeGULL Multilingual: a Dataset of Geo-Culturally Situated Stereotypes
Mukul Bhutani, Kevin Robinson, Vinodkumar Prabhakaran, Shachi Dave, Sunipa Dev
08 Mar 2024

"Flex Tape Can't Fix That": Bias and Misinformation in Edited Language Models
Karina Halevy, Anna Sotnikova, Badr AlKhamissi, Syrielle Montariol, Antoine Bosselut
29 Feb 2024

A Note on Bias to Complete
Jia Xu, Mona Diab
18 Feb 2024

From Prejudice to Parity: A New Approach to Debiasing Large Language Model Word Embeddings
Aishik Rakshit, Smriti Singh, Shuvam Keshari, Arijit Ghosh Chowdhury, Vinija Jain, Vasu Sharma
18 Feb 2024
MAFIA: Multi-Adapter Fused Inclusive LanguAge Models
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2024
Prachi Jain, Ashutosh Sathe, Varun Gumma, Kabir Ahuja, Sunayana Sitaram
12 Feb 2024

Measuring Machine Learning Harms from Stereotypes Requires Understanding Who Is Harmed by Which Errors in What Ways
Conference on Fairness, Accountability and Transparency (FAccT), 2024
Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett
06 Feb 2024

Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth
03 Dec 2023

PEFTDebias: Capturing debiasing information using PEFTs
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Sumit Agarwal, Aditya Srikanth Veerubhotla, Srijan Bansal
01 Dec 2023

Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals
Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi, Vivek Srikumar, Sameer Singh, Noah A. Smith
16 Nov 2023
Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs
Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot
08 Nov 2023

A Predictive Factor Analysis of Social Biases and Task-Performance in Pretrained Masked Language Models
Yi Zhou, Jose Camacho-Collados, Danushka Bollegala
19 Oct 2023

Co²PT: Mitigating Bias in Pre-trained Language Models through Counterfactual Contrastive Prompt Tuning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Xiangjue Dong, Ziwei Zhu, Zhuoer Wang, Maria Teleki, James Caverlee
19 Oct 2023

Mitigating Bias for Question Answering Models by Tracking Bias Influence
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Mingyu Derek Ma, Jiun-Yu Kao, Arpit Gupta, Yu-Hsiang Lin, Wenbo Zhao, Tagyoung Chung, Wei Wang, Kai-Wei Chang, Nanyun Peng
13 Oct 2023

Large Language Model Alignment: A Survey
Shangda Wu, Renren Jin, Yufei Huang, Chuang Liu, Weilong Dong, Zishan Guo, Xinwei Wu, Yan Liu, Deyi Xiong
26 Sep 2023

Evaluating Gender Bias of Pre-trained Language Models in Natural Language Inference by Considering All Labels
International Conference on Language Resources and Evaluation (LREC), 2023
Panatchakorn Anantaprayoon, Masahiro Kaneko, Naoaki Okazaki
18 Sep 2023