Gender Bias in Neural Natural Language Processing

Kaiji Lu, Piotr (Peter) Mardziel, Fangjing Wu, Preetam Amancharla, Anupam Datta
31 July 2018 (arXiv:1807.11714)

Papers citing "Gender Bias in Neural Natural Language Processing" (48 papers)
A Comparative Analysis of Ethical and Safety Gaps in LLMs using Relative Danger Coefficient (06 May 2025)
Yehor Tereshchenko, Mika Hämäläinen [ELM]

Towards Large Language Models that Benefit for All: Benchmarking Group Fairness in Reward Models (10 Mar 2025)
Kefan Song, Jin Yao, Runnan Jiang, Rohan Chandra, Shangtong Zhang [ALM]

Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management (22 Jan 2025)
Xiahua Wei, Naveen Kumar, Han Zhang

No Free Lunch: Retrieval-Augmented Generation Undermines Fairness in LLMs, Even for Vigilant Users (10 Oct 2024)
Mengxuan Hu, Hongyi Wu, Zihan Guan, Ronghang Zhu, Dongliang Guo, Daiqing Qi, Sheng Li [SILM]

Post-hoc Study of Climate Microtargeting on Social Media Ads with LLMs: Thematic Insights and Fairness Evaluation (07 Oct 2024)
Tunazzina Islam, Dan Goldwasser

Collapsed Language Models Promote Fairness (06 Oct 2024)
Jingxuan Xu, Wuyang Chen, Linyi Li, Yao Zhao, Yunchao Wei

Towards Understanding Task-agnostic Debiasing Through the Lenses of Intrinsic Bias and Forgetfulness (06 Jun 2024)
Guangliang Liu, Milad Afshari, Xitong Zhang, Zhiyu Xue, Avrajit Ghosh, Bidhan Bashyal, Rongrong Wang, K. Johnson

Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes (06 May 2024)
Damin Zhang, Yi Zhang, Geetanjali Bihani, Julia Taylor Rayz

Measuring Bias in a Ranked List using Term-based Representations (09 Mar 2024)
Amin Abolghasemi, Leif Azzopardi, Arian Askari, Maarten de Rijke, Suzan Verberne

Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts (16 Oct 2023)
Christina Chance, Da Yin, Dakuo Wang, Kai-Wei Chang

A Survey on Fairness in Large Language Models (20 Aug 2023)
Yingji Li, Mengnan Du, Rui Song, Xin Wang, Ying Wang [ALM]

Prompt Tuning Pushes Farther, Contrastive Learning Pulls Closer: A Two-Stage Approach to Mitigate Social Biases (04 Jul 2023)
Yingji Li, Mengnan Du, Xin Wang, Ying Wang

Long-form analogies generated by chatGPT lack human-like psycholinguistic properties (07 Jun 2023)
S. M. Seals, V. Shalin

Out-of-Distribution Generalization in Text Classification: Past, Present, and Future (23 May 2023)
Linyi Yang, Y. Song, Xuan Ren, Chenyang Lyu, Yidong Wang, Lingqiao Liu, Jindong Wang, Jennifer Foster, Yue Zhang [OOD]

Should We Attend More or Less? Modulating Attention for Fairness (22 May 2023)
A. Zayed, Gonçalo Mordido, Samira Shabanian, Sarath Chandar

ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages (17 May 2023)
Sourojit Ghosh, Aylin Caliskan

Logic Against Bias: Textual Entailment Mitigates Stereotypical Sentence Reasoning (10 Mar 2023)
Hongyin Luo, James R. Glass [NAI]

Synthcity: facilitating innovative use cases of synthetic data in different data modalities (18 Jan 2023)
Zhaozhi Qian, B. Cebere, M. Schaar [SyDa]

CORGI-PM: A Chinese Corpus For Gender Bias Probing and Mitigation (01 Jan 2023)
Ge Zhang, Yizhi Li, Yaoyao Wu, Linyuan Zhang, Chenghua Lin, Jiayi Geng, Shi Wang, Jie Fu

Foundation models in brief: A historical, socio-technical focus (17 Dec 2022)
Johannes Schneider [VLM]

Deep Causal Learning: Representation, Discovery and Inference (07 Nov 2022)
Zizhen Deng, Xiaolong Zheng, Hu Tian, D. Zeng [CML, BDL]

The Shared Task on Gender Rewriting (22 Oct 2022)
Bashar Alhafni, Nizar Habash, Houda Bouamor, Ossama Obeid, Sultan Alrowili, ..., Mohamed Gabr, Abderrahmane Issam, Abdelrahim Qaddoumi, K. Vijay-Shanker, Mahmoud Zyate

AugCSE: Contrastive Sentence Embedding with Diverse Augmentations (20 Oct 2022)
Zilu Tang, Muhammed Yusuf Kocyigit, Derry Wijaya

The User-Aware Arabic Gender Rewriter (14 Oct 2022)
Bashar Alhafni, Ossama Obeid, Nizar Habash

FairDistillation: Mitigating Stereotyping in Language Models (10 Jul 2022)
Pieter Delobelle, Bettina Berendt

What Changed? Investigating Debiasing Methods using Causal Mediation Analysis (01 Jun 2022)
Su-Ha Jeoung, Jana Diesner [CML]

Using Natural Sentences for Understanding Biases in Language Models (12 May 2022)
Sarah Alnegheimish, Alicia Guo, Yi Sun

Synthetic Data -- what, why and how? (06 May 2022)
James Jordon, Lukasz Szpruch, F. Houssiau, M. Bottarelli, Giovanni Cherubin, Carsten Maple, Samuel N. Cohen, Adrian Weller

Informativeness and Invariance: Two Perspectives on Spurious Correlations in Natural Language (09 Apr 2022)
Jacob Eisenstein [CML]

Making a (Counterfactual) Difference One Rationale at a Time (13 Jan 2022)
Michael J. Plyler, Michal Green, Min Chi

A Survey on Gender Bias in Natural Language Processing (28 Dec 2021)
Karolina Stańczak, Isabelle Augenstein

Sparse Interventions in Language Models with Differentiable Masking (13 Dec 2021)
Nicola De Cao, Leon Schmid, Dieuwke Hupkes, Ivan Titov

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation (06 Dec 2021)
Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, ..., Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang

Reason first, then respond: Modular Generation for Knowledge-infused Dialogue (09 Nov 2021)
Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, Jason Weston [KELM, LRM]

DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks (25 Oct 2021)
A. Saha, Trent Kyono, J. Linmans, M. Schaar [CML]

BBQ: A Hand-Built Bias Benchmark for Question Answering (15 Oct 2021)
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Sam Bowman

Enhancing Model Robustness and Fairness with Causality: A Regularization Approach (03 Oct 2021)
Zhao Wang, Kai Shu, A. Culotta [OOD]

Balancing out Bias: Achieving Fairness Through Balanced Training (16 Sep 2021)
Xudong Han, Timothy Baldwin, Trevor Cohn

Mitigating Language-Dependent Ethnic Bias in BERT (13 Sep 2021)
Jaimeen Ahn, Alice H. Oh

Sustainable Modular Debiasing of Language Models (08 Sep 2021)
Anne Lauscher, Tobias Lüken, Goran Glavas

An Investigation of the (In)effectiveness of Counterfactually Augmented Data (01 Jul 2021)
Nitish Joshi, He He [OODD]

A Survey of Data Augmentation Approaches for NLP (07 May 2021)
Steven Y. Feng, Varun Gangal, Jason W. Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard H. Hovy [AIMat]

Causal Learning for Socially Responsible AI (25 Apr 2021)
Lu Cheng, Ahmadreza Mosallanezhad, Paras Sheth, Huan Liu

Movement Pruning: Adaptive Sparsity by Fine-Tuning (15 May 2020)
Victor Sanh, Thomas Wolf, Alexander M. Rush

Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem (09 Apr 2020)
Danielle Saunders, Bill Byrne [AI4CE]

Evaluating Models' Local Decision Boundaries via Contrast Sets (06 Apr 2020)
Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, ..., Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou [ELM]

Learning the Difference that Makes a Difference with Counterfactually-Augmented Data (26 Sep 2019)
Divyansh Kaushik, Eduard H. Hovy, Zachary Chase Lipton [CML]

Conceptor Debiasing of Word Representations Evaluated on WEAT (14 Jun 2019)
S. Karve, Lyle Ungar, João Sedoc [FaML]