ResearchTrend.AI
Does Gender Matter? Towards Fairness in Dialogue Systems
International Conference on Computational Linguistics (COLING), 2019
16 October 2019
Haochen Liu
Jamell Dacon
Wenqi Fan
Hui Liu
Zitao Liu
Shucheng Zhou

Papers citing "Does Gender Matter? Towards Fairness in Dialogue Systems"

Showing 50 of 93 citing papers:
• Extended LSTM: Adaptive Feature Gating for Toxic Comment Classification. Noor Islam S. Mohammad. 19 Oct 2025.
• REFER: Mitigating Bias in Opinion Summarisation via Frequency Framed Prompting. Nannan Huang, Haytham M. Fayek, Xiuzhen Zhang. 19 Sep 2025.
• FairLangProc: A Python package for fairness in NLP. Arturo Pérez-Peralta, Sandra Benítez-Peña, Rosa E. Lillo. 05 Aug 2025.
• Quantifying Misattribution Unfairness in Authorship Attribution. Annual Meeting of the Association for Computational Linguistics (ACL), 2025. Pegah Alipoormolabashi, Ajay Patel, Niranjan Balasubramanian. 02 Jun 2025.
• Relative Bias: A Comparative Framework for Quantifying Bias in LLMs. Alireza Arbabi, Florian Kerschbaum. 22 May 2025.
• LFTF: Locating First and Then Fine-Tuning for Mitigating Gender Bias in Large Language Models. Zhanyue Qin, Yue Ding, Deyuan Liu, Qingbin Liu, Junxian Cai, Xi Chen, Zhiying Tu, Dianhui Chu, Cuiyun Gao, Dianbo Sui. 21 May 2025.
• SAGE: A Generic Framework for LLM Safety Evaluation. Madhur Jindal, Hari Shrawgi, Parag Agrawal, Sandipan Dandapat. 28 Apr 2025.
• Metamorphic Testing for Fairness Evaluation in Large Language Models: Identifying Intersectional Bias in LLaMA and GPT. International Conference on Software Engineering Research and Applications (ICSERA), 2025. Harishwar Reddy, Madhusudan Srinivasan, Upulee Kanewala. 04 Apr 2025.
• Do Existing Testing Tools Really Uncover Gender Bias in Text-to-Image Models? Yunbo Lyu, Zhou Yang, Ye Liu, Jing Jiang, David Lo. 27 Jan 2025.
• Bias Vector: Mitigating Biases in Language Models with Task Arithmetic Approach. International Conference on Computational Linguistics (COLING), 2024. Daiki Shirafuji, Makoto Takenaka, Shinya Taguchi. 16 Dec 2024.
• Towards Resource Efficient and Interpretable Bias Mitigation in Large Language Models. S. Tong, Eliott Zemour, Rawisara Lohanimit, Lalana Kagal. 02 Dec 2024.
• Mitigating Gender Bias in Code Large Language Models via Model Editing. Zhan Qin, Haochuan Wang, Zecheng Wang, Deyuan Liu, Cunhang Fan, Zhao Lv, Zhiying Tu, Dianhui Chu, Dianbo Sui. 10 Oct 2024.
• No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users. Mengxuan Hu, Hongyi Wu, Zihan Guan, Ronghang Zhu, Dongliang Guo, Daiqing Qi, Sheng Li. 10 Oct 2024.
• MABR: Multilayer Adversarial Bias Removal Without Prior Bias Knowledge. AAAI Conference on Artificial Intelligence (AAAI), 2024. Maxwell J. Yin, Boyu Wang, Charles Ling. 10 Aug 2024.
• MBBQ: A Dataset for Cross-Lingual Comparison of Stereotypes in Generative LLMs. Vera Neplenbroek, Arianna Bisazza, Raquel Fernández. 11 Jun 2024.
• Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas. Chengyuan Deng, Yiqun Duan, Xin Jin, Heng Chang, Yijun Tian, ..., Kuofeng Gao, Sihong He, Jun Zhuang, Lu Cheng, Haohan Wang. 08 Jun 2024.
• The Life Cycle of Large Language Models: A Review of Biases in Education. Jinsook Lee, Yann Hicke, Renzhe Yu, Christopher A. Brooks, René F. Kizilcec. 03 Jun 2024.
• A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models. Knowledge Discovery and Data Mining (KDD), 2024. Wenqi Fan, Yujuan Ding, Liang-bo Ning, Shijie Wang, Hengyun Li, D. Yin, Tat-Seng Chua, Qing Li. 10 May 2024.
• Graph Machine Learning in the Era of Large Language Models (LLMs). Wenqi Fan, Shijie Wang, Jiani Huang, Zhikai Chen, Yu Song, ..., Haitao Mao, Hui Liu, Xiaorui Liu, D. Yin, Qing Li. 23 Apr 2024.
• Unifying Bias and Unfairness in Information Retrieval: A Survey of Challenges and Opportunities with Large Language Models. Sunhao Dai, Chen Xu, Shicheng Xu, Liang Pang, Zhenhua Dong, Jun Xu. 17 Apr 2024.
• FairPair: A Robust Evaluation of Biases in Language Models through Paired Perturbations. Jane Dwivedi-Yu, Raaz Dwivedi, Timo Schick. 09 Apr 2024.
• AXOLOTL: Fairness through Assisted Self-Debiasing of Large Language Model Outputs. Sana Ebrahimi, Kaiwen Chen, Abolfazl Asudeh, Gautam Das, Nick Koudas. 01 Mar 2024.
• KoDialogBench: Evaluating Conversational Understanding of Language Models with Korean Dialogue Benchmark. Seongbo Jang, Seonghyeon Lee, Hwanjo Yu. 27 Feb 2024.
• Potential and Challenges of Model Editing for Social Debiasing. Jianhao Yan, Futing Wang, Yafu Li, Yue Zhang. 21 Feb 2024.
• Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes. Isabel O. Gallegos, Ryan Rossi, Joe Barrow, Md Mehrab Tanjim, Tong Yu, Hanieh Deilamsalehy, Ruiyi Zhang, Sungchul Kim, Franck Dernoncourt. 03 Feb 2024.
• Fortifying Ethical Boundaries in AI: Advanced Strategies for Enhancing Security in Large Language Models. Yunhong He, Jianling Qiu, Wei Zhang, Zhe Yuan. 27 Jan 2024.
• Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies. Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth. 03 Dec 2023.
• Step by Step to Fairness: Attributing Societal Bias in Task-oriented Dialogue Systems. Hsuan Su, Rebecca Qian, Chinnadhurai Sankar, Shahin Shayandeh, Shang-Tse Chen, Hung-yi Lee, Daniel M. Bikel. 11 Nov 2023.
• Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models. Hsuan Su, Cheng-Chu Cheng, Hua Farn, Shachi H. Kumar, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee. 17 Oct 2023.
• Mitigating Bias for Question Answering Models by Tracking Bias Influence. North American Chapter of the Association for Computational Linguistics (NAACL), 2023. Mingyu Derek Ma, Jiun-Yu Kao, Arpit Gupta, Yu-Hsiang Lin, Wenbo Zhao, Tagyoung Chung, Wei Wang, Kai-Wei Chang, Nanyun Peng. 13 Oct 2023.
• Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems. Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023. Yixin Wan, Jieyu Zhao, Vasu Sharma, Nanyun Peng, Kai-Wei Chang. 08 Oct 2023.
• Survey of Social Bias in Vision-Language Models. Nayeon Lee, Yejin Bang, Holy Lovenia, Samuel Cahyawijaya, Wenliang Dai, Pascale Fung. 24 Sep 2023.
• Are You Worthy of My Trust?: A Socioethical Perspective on the Impacts of Trustworthy AI Systems on the Environment and Human Society. Jamell Dacon. 18 Sep 2023.
• Bias and Fairness in Large Language Models: A Survey. Computational Linguistics (CL), 2023. Isabel O. Gallegos, Ryan Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed. 02 Sep 2023.
• Adversarial Fine-Tuning of Language Models: An Iterative Optimisation Approach for the Generation and Detection of Problematic Content. Charles O'Neill, Jack Miller, I. Ciucă, Y. Ting 丁, Thang Bui. 26 Aug 2023.
• Learning to Generate Equitable Text in Dialogue from Biased Training Data. Annual Meeting of the Association for Computational Linguistics (ACL), 2023. Anthony Sicilia, Malihe Alikhani. 10 Jul 2023.
• Recommender Systems in the Era of Large Language Models (LLMs). IEEE Transactions on Knowledge and Data Engineering (TKDE), 2023. Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, ..., Zhen Wen, Fei Wang, Xiangyu Zhao, Shucheng Zhou, Qing Li. 05 Jul 2023.
• CBBQ: A Chinese Bias Benchmark Dataset Curated with Human-AI Collaboration for Large Language Models. International Conference on Language Resources and Evaluation (LREC), 2023. Yufei Huang, Deyi Xiong. 28 Jun 2023.
• Sociodemographic Bias in Language Models: A Survey and Forward Path. Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau. 13 Jun 2023.
• Exposing Bias in Online Communities through Large-Scale Language Models. Celine Wald, Lukas Pfahler. 04 Jun 2023.
• Healing Unsafe Dialogue Responses with Weak Supervision Signals. Zi Liang, Pinghui Wang, Ruofei Zhang, Shuo Zhang, Xiaofan Ye, Yi Huang, Junlan Feng. 25 May 2023.
• Reducing Sensitivity on Speaker Names for Text Generation from Dialogues. Annual Meeting of the Association for Computational Linguistics (ACL), 2023. Qi Jia, Haifeng Tang, Kenny Q. Zhu. 23 May 2023.
• BiasAsker: Measuring the Bias in Conversational AI System. Yuxuan Wan, Wenxuan Wang, Pinjia He, Jiazhen Gu, Haonan Bai, Michael Lyu. 21 May 2023.
• CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2023. Jiaxu Zhao, Meng Fang, Zijing Shi, Yitong Li, Ling-Hao Chen, Mykola Pechenizkiy. 18 May 2023.
• From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models. Annual Meeting of the Association for Computational Linguistics (ACL), 2023. Shangbin Feng, Chan Young Park, Yuhan Liu, Yulia Tsvetkov. 15 May 2023.
• On the Independence of Association Bias and Empirical Fairness in Language Models. Conference on Fairness, Accountability and Transparency (FAccT), 2023. Laura Cabello, Anna Katrine van Zee, Anders Søgaard. 20 Apr 2023.
• Harnessing Knowledge and Reasoning for Human-Like Natural Language Generation: A Brief Review. IEEE Data Engineering Bulletin (DEB), 2022. Jiangjie Chen, Yanghua Xiao. 07 Dec 2022.
• Language Generation Models Can Cause Harm: So What Can We Do About It? An Actionable Survey. Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022. Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov. 14 Oct 2022.
• The User-Aware Arabic Gender Rewriter. Bashar Alhafni, Ossama Obeid, Farah E. Shamout. 14 Oct 2022.
• Unified Detoxifying and Debiasing in Language Generation via Inference-time Adaptive Optimization. International Conference on Learning Representations (ICLR), 2022. Zonghan Yang, Xiaoyuan Yi, Peng Li, Yang Liu, Xing Xie. 10 Oct 2022.