ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1910.10486 · Cited By
Does Gender Matter? Towards Fairness in Dialogue Systems

16 October 2019
Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, Jiliang Tang

Papers citing "Does Gender Matter? Towards Fairness in Dialogue Systems"

17 / 17 papers shown
$\texttt{SAGE}$: A Generic Framework for LLM Safety Evaluation
Madhur Jindal, Hari Shrawgi, Parag Agrawal, Sandipan Dandapat · ELM · 28 Apr 2025
Do Existing Testing Tools Really Uncover Gender Bias in Text-to-Image Models?
Yunbo Lyu, Zhou Yang, Yuqing Niu, Jing Jiang, David Lo · 28 Jan 2025
No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users
Mengxuan Hu, Hongyi Wu, Zihan Guan, Ronghang Zhu, Dongliang Guo, Daiqing Qi, Sheng Li · SILM · 10 Oct 2024
A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models
Wenqi Fan, Yujuan Ding, Liang-bo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, Qing Li · RALM, 3DV · 10 May 2024
Learning to Generate Equitable Text in Dialogue from Biased Training Data
Anthony Sicilia, Malihe Alikhani · 10 Jul 2023
Recommender Systems in the Era of Large Language Models (LLMs)
Zihuai Zhao, Wenqi Fan, Jiatong Li, Yunqing Liu, Xiaowei Mei, ..., Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li · KELM · 05 Jul 2023
The User-Aware Arabic Gender Rewriter
Bashar Alhafni, Ossama Obeid, Nizar Habash · 14 Oct 2022
AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model
Saleh Soltan, Shankar Ananthakrishnan, Jack G. M. FitzGerald, Rahul Gupta, Wael Hamza, ..., Mukund Sridhar, Fabian Triefenbach, Apurv Verma, Gökhan Tür, Premkumar Natarajan · 02 Aug 2022
Detecting Harmful Online Conversational Content towards LGBTQIA+ Individuals
Jamell Dacon, Harry Shomer, Shaylynn Crum-Dacon, Jiliang Tang · 15 Jun 2022
Speciesist Language and Nonhuman Animal Bias in English Masked Language Models
Masashi Takeshita, Rafal Rzepka, K. Araki · 10 Mar 2022
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe · OSLM, ALM · 04 Mar 2022
Jointly Attacking Graph Neural Network and its Explanations
Wenqi Fan, Wei Jin, Xiaorui Liu, Han Xu, Xianfeng Tang, Suhang Wang, Qing Li, Jiliang Tang, Jianping Wang, Charu C. Aggarwal · AAML · 07 Aug 2021
Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling
Emily Dinan, Gavin Abercrombie, A. S. Bergman, Shannon L. Spruit, Dirk Hovy, Y-Lan Boureau, Verena Rieser · 07 Jul 2021
A Survey of Race, Racism, and Anti-Racism in NLP
Anjalie Field, Su Lin Blodgett, Zeerak Talat, Yulia Tsvetkov · 21 Jun 2021
NestedVAE: Isolating Common Factors via Weak Supervision
M. Vowels, Necati Cihan Camgöz, Richard Bowden · CML, DRL · 26 Feb 2020
Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston · 10 Nov 2019
A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan · SyDa, FaML · 23 Aug 2019