ResearchTrend.AI
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Annual Meeting of the Association for Computational Linguistics (ACL), 2019
11 June 2019 · arXiv:1906.04571
Ran Zmigrod
Sabrina J. Mielke
Hanna M. Wallach
Ryan Cotterell

Papers citing "Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology"

Showing 50 of 201 citing papers
An Empirical Survey of Model Merging Algorithms for Social Bias Mitigation
Daiki Shirafuji
Tatsuhiko Saito
Yasutomo Kimura
MoMe, KELM
165
0
0
02 Dec 2025
No Free Lunch in Language Model Bias Mitigation? Targeted Bias Reduction Can Exacerbate Unmitigated LLM Biases
Shireen Chand
Faith Baca
Emilio Ferrara
151
2
0
23 Nov 2025
TriCon-Fair: Triplet Contrastive Learning for Mitigating Social Bias in Pre-trained Language Models
Chong Lyu
Lin Li
Shiqing Wu
Jingling Yuan
182
0
0
02 Nov 2025
Can SAEs reveal and mitigate racial biases of LLMs in healthcare?
Hiba Ahsan
Byron C. Wallace
LLMSV
246
1
0
31 Oct 2025
FairImagen: Post-Processing for Bias Mitigation in Text-to-Image Models
Zihao Fu
Ryan Brown
Shun Shao
Kai Rawal
Eoin Delaney
Chris Russell
162
2
0
24 Oct 2025
Investigating Thinking Behaviours of Reasoning-Based Language Models for Social Bias Mitigation
Guoqing Luo
Iffat Maab
Lili Mou
Junichi Yamagishi
LRM
216
2
0
20 Oct 2025
Mitigating Biases in Language Models via Bias Unlearning
Dianqing Liu
Yi Liu
Guoqing Jin
Zhendong Mao
MU
247
3
0
30 Sep 2025
BiasFreeBench: a Benchmark for Mitigating Bias in Large Language Model Responses
Xin Xu
Xunzhi He
Churan Zhi
Ruizhe Chen
Julian McAuley
Zexue He
141
2
0
30 Sep 2025
Bridging Fairness and Explainability: Can Input-Based Explanations Promote Fairness in Hate Speech Detection?
Yifan Wang
Mayank Jobanputra
Ji-Ung Lee
Soyoung Oh
Isabel Valera
Vera Demberg
280
1
0
26 Sep 2025
Fair-GPTQ: Bias-Aware Quantization for Large Language Models
Irina Proskurina
Guillaume Metzler
Julien Velcin
MQ
256
0
0
18 Sep 2025
C3DE: Causal-Aware Collaborative Neural Controlled Differential Equation for Long-Term Urban Crowd Flow Prediction
Yuting Liu
Qiang Zhou
Hanzhe Li
Chenqi Gong
Jingjing Gu
200
0
0
15 Sep 2025
CoBA: Counterbias Text Augmentation for Mitigating Various Spurious Correlations via Semantic Triples
Kyohoon Jin
Juhwan Choi
Jungmin Yun
Junho Lee
Soojin Jang
Youngbin Kim
240
0
0
26 Aug 2025
Freeze and Reveal: Exposing Modality Bias in Vision-Language Models
Vivek Hruday Kavuri
Vysishtya Karanam
Venkata Jahnavi Venkamsetty
Kriti Madumadukala
Lakshmipathi Balaji Darur
Ponnurangam Kumaraguru
VLM
175
1
0
10 Aug 2025
I Think, Therefore I Am Under-Qualified? A Benchmark for Evaluating Linguistic Shibboleth Detection in LLM Hiring Evaluations
Julia Kharchenko
Tanya Roosta
Aman Chadha
Chirag Shah
146
1
0
06 Aug 2025
Gender Bias in English-to-Greek Machine Translation
Eleni Gkovedarou
Joke Daems
Luna De Bruyne
316
2
0
11 Jun 2025
Flattery, Fluff, and Fog: Diagnosing and Mitigating Idiosyncratic Biases in Preference Models
Anirudh Bharadwaj
Chaitanya Malaviya
Nitish Joshi
Mark Yatskar
480
7
0
05 Jun 2025
Dissecting Bias in LLMs: A Mechanistic Interpretability Perspective
Bhavik Chandna
Zubair Bashir
Procheta Sen
343
10
0
05 Jun 2025
Paying Alignment Tax with Contrastive Learning
Buse Sibel Korkmaz
Rahul Nair
Elizabeth M. Daly
Antonio del Rio Chanona
354
2
0
25 May 2025
A Survey on Stereotype Detection in Natural Language Processing
ACM Computing Surveys (ACM Comput. Surv.), 2025
Alessandra Teresa Cignarella
Anastasia Giachanou
Els Lefever
271
0
0
23 May 2025
Mitigating Gender Bias via Fostering Exploratory Thinking in LLMs
Kangda Wei
Hasnat Md Abdullah
Ruihong Huang
350
2
0
22 May 2025
From n-gram to Attention: How Model Architectures Learn and Propagate Bias in Language Modeling
Mohsinul Kabir
Tasfia Tahsin
Sophia Ananiadou
KELM, AI4CE
460
2
0
18 May 2025
FairSteer: Inference Time Debiasing for LLMs with Dynamic Activation Steering
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Yongbin Li
Zhiting Fan
Ruizhe Chen
Xiaotang Gai
Luqi Gong
Yan Zhang
Zuozhu Liu
LLMSV
406
26
0
20 Apr 2025
Enforcing Consistency and Fairness in Multi-level Hierarchical Classification with a Mask-based Output Layer
Shijing Chen
Shoaib Jameel
Mohamed Reda Bouadjenek
Feilong Tang
Usman Naseem
Basem Suleiman
Hakim Hacid
Flora D. Salim
Imran Razzak
325
0
0
19 Mar 2025
BiasEdit: Debiasing Stereotyped Language Models via Model Editing
Xin Xu
Wei Xu
Ningyu Zhang
Julian McAuley
KELM
393
13
0
11 Mar 2025
Mitigating Bias in RAG: Controlling the Embedder
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Taeyoun Kim
Jacob Mitchell Springer
Aditi Raghunathan
Maarten Sap
359
7
0
24 Feb 2025
Man Made Language Models? Evaluating LLMs' Perpetuation of Masculine Generics Bias
Enzo Doyen
Amalia Todirascu
399
3
0
14 Feb 2025
Bias Vector: Mitigating Biases in Language Models with Task Arithmetic Approach
International Conference on Computational Linguistics (COLING), 2024
Daiki Shirafuji
Makoto Takenaka
Shinya Taguchi
LLMAG
312
12
0
16 Dec 2024
Improving LLM Group Fairness on Tabular Data via In-Context Learning
Valeriia Cherepanova
Chia-Jung Lee
Nil-Jana Akpinar
Riccardo Fogliato
Martín Bertrán
Michael Kearns
James Zou
LMTD
513
5
0
05 Dec 2024
Exploring Accuracy-Fairness Trade-off in Large Language Models
Qingquan Zhang
Qiqi Duan
Bo Yuan
Yuhui Shi
Qingbin Liu
345
3
0
21 Nov 2024
Bias in Large Language Models: Origin, Evaluation, and Mitigation
Yufei Guo
Muzhe Guo
Juntao Su
Zhou Yang
Mengqiu Zhu
Hongfei Li
Mengyang Qiu
Shuo Shuo Liu
AILaw
405
97
0
16 Nov 2024
Causality for Large Language Models
Anpeng Wu
Kun Kuang
Minqin Zhu
Yingrong Wang
Yujia Zheng
Kairong Han
Yangqiu Song
Guangyi Chen
Leilei Gan
Kun Zhang
LRM
398
20
0
20 Oct 2024
Mitigating Gender Bias in Code Large Language Models via Model Editing
Zhan Qin
Haochuan Wang
Zecheng Wang
Deyuan Liu
Cunhang Fan
Zhao Lv
Zhiying Tu
Dianhui Chu
Dianbo Sui
KELM
245
3
0
10 Oct 2024
Collapsed Language Models Promote Fairness
International Conference on Learning Representations (ICLR), 2024
Jingxuan Xu
Wuyang Chen
Linyi Li
Yao Zhao
Yunchao Wei
527
1
0
06 Oct 2024
REFINE-LM: Mitigating Language Model Stereotypes via Reinforcement Learning
European Conference on Artificial Intelligence (ECAI), 2024
Rameez Qureshi
Naim Es-Sebbani
Luis Galárraga
Yvette Graham
Miguel Couceiro
Zied Bouraoui
252
1
0
18 Aug 2024
Decoding Biases: Automated Methods and LLM Judges for Gender Bias Detection in Language Models
Shachi H. Kumar
Saurav Sahay
Sahisnu Mazumder
Eda Okur
R. Manuvinakurike
Nicole Beckage
Hsuan Su
Hung-yi Lee
L. Nachman
ELM
301
35
0
07 Aug 2024
FairFlow: An Automated Approach to Model-based Counterfactual Data Augmentation For NLP
E. Tokpo
T. Calders
218
6
0
23 Jul 2024
Are Large Language Models Really Bias-Free? Jailbreak Prompts for Assessing Adversarial Robustness to Bias Elicitation
Riccardo Cantini
Giada Cosenza
A. Orsino
Domenico Talia
AAML
467
16
0
11 Jul 2024
Do Multilingual Large Language Models Mitigate Stereotype Bias?
Shangrui Nie
Michael Fromm
Charles F Welch
Rebekka Görge
Akbar Karimi
Joan Plepi
Nazia Afsan Mowmita
Nicolas Flores-Herr
Mehdi Ali
Lucie Flek
379
16
0
08 Jul 2024
Social Bias Evaluation for Large Language Models Requires Prompt Variations
Rem Hida
Masahiro Kaneko
Naoaki Okazaki
361
37
0
03 Jul 2024
OxonFair: A Flexible Toolkit for Algorithmic Fairness
Eoin Delaney
Zihao Fu
Sandra Wachter
Brent Mittelstadt
Chris Russell
FaML
299
10
0
30 Jun 2024
Does Context Help Mitigate Gender Bias in Neural Machine Translation?
Harritxu Gete
Thierry Etchegoyhen
235
1
0
18 Jun 2024
Disentangling Dialect from Social Bias via Multitask Learning to Improve Fairness
Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Maximilian Spliethover
Sai Nikhil Menon
Henning Wachsmuth
269
5
0
14 Jun 2024
On the Intrinsic Self-Correction Capability of LLMs: Uncertainty and Latent Concept
Guangliang Liu
Haitao Mao
Bochuan Cao
Zhiyu Xue
K. Johnson
Shucheng Zhou
Rongrong Wang
LRM
276
19
0
04 Jun 2024
Large Language Models as Recommender Systems: A Study of Popularity Bias
Jan Malte Lichtenberg
Alexander K. Buchholz
Pola Schwöbel
380
19
0
03 Jun 2024
Low-rank finetuning for LLMs: A fairness perspective
Saswat Das
Marco Romanelli
Cuong Tran
Zarreen Reza
B. Kailkhura
Ferdinando Fioretto
250
5
0
28 May 2024
Large Language Model Bias Mitigation from the Perspective of Knowledge Editing
Ruizhe Chen
Yichen Li
Zikai Xiao
Zuo-Qiang Liu
KELM
402
19
0
15 May 2024
Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes
International Conference on Computational Linguistics (COLING), 2024
Damin Zhang
Yi Zhang
Geetanjali Bihani
Julia Taylor Rayz
530
4
0
06 May 2024
Prompting Techniques for Reducing Social Bias in LLMs through System 1 and System 2 Cognitive Processes
M. Kamruzzaman
Gene Louis Kim
555
38
0
26 Apr 2024
A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation
International Conference on Language Resources and Evaluation (LREC), 2024
Jifan Yu
Xiaohan Zhang
Yifan Xu
Xuanyu Lei
Zijun Yao
Jing Zhang
Lei Hou
Juanzi Li
HILM
338
5
0
04 Apr 2024
Towards detecting unanticipated bias in Large Language Models
Anna Kruspe
282
9
0
03 Apr 2024
Page 1 of 5