ResearchTrend.AI

Identifying and Reducing Gender Bias in Word-Level Language Models
Shikha Bordia, Samuel R. Bowman
arXiv:1904.03035 (5 April 2019) [FaML]

Papers citing "Identifying and Reducing Gender Bias in Word-Level Language Models"

50 of 207 citing papers shown (page 1 of 5):
• Taxonomy-based CheckList for Large Language Model Evaluation
  Damin Zhang (15 Dec 2023)
• RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training
  Jaehyung Kim, Yuning Mao, Rui Hou, Hanchao Yu, Davis Liang, Pascale Fung, Qifan Wang, Fuli Feng, Lifu Huang, Madian Khabsa (07 Dec 2023) [AAML]
• Weakly Supervised Detection of Hallucinations in LLM Activations
  Miriam Rateike, C. Cintas, John Wamburu, Tanya Akumu, Skyler Speakman (05 Dec 2023)
• A Survey on Large Language Model (LLM) Security and Privacy: The Good, the Bad, and the Ugly
  Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Eric Sun, Yue Zhang (04 Dec 2023) [PILM, ELM]
• Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
  Vithya Yogarajan, Gillian Dobbie, Te Taka Keegan, R. Neuwirth (03 Dec 2023) [ALM]
• Potential Societal Biases of ChatGPT in Higher Education: A Scoping Review
  Ming Li, Ariunaa Enkhtur, B. Yamamoto, Fei Cheng, Lilan Chen (24 Nov 2023) [AI4CE]
• Towards Auditing Large Language Models: Improving Text-based Stereotype Detection
  Wu Zekun, Sahan Bulathwela, Adriano Soares Koshiyama (23 Nov 2023)
• Can Language Model Moderators Improve the Health of Online Discourse?
  Hyundong Justin Cho, Shuai Liu, Taiwei Shi, Darpan Jain, Basem Rizk, ..., Zixun Lu, Nuan Wen, Jonathan Gratch, Emilio Ferrera, Jonathan May (16 Nov 2023) [AI4MH]
• P^3SUM: Preserving Author's Perspective in News Summarization with Diffusion Language Models
  Yuhan Liu, Shangbin Feng, Xiaochuang Han, Vidhisha Balachandran, Chan Young Park, Sachin Kumar, Yulia Tsvetkov (16 Nov 2023) [DiffM]
• Tailoring with Targeted Precision: Edit-Based Agents for Open-Domain Procedure Customization
  Yash Kumar Lal, Li Zhang, Faeze Brahman, Bodhisattwa Prasad Majumder, Peter Clark, Niket Tandon (16 Nov 2023) [KELM]
• A Survey of AI Text-to-Image and AI Text-to-Video Generators
  Aditi Singh (10 Nov 2023)
• Unraveling Downstream Gender Bias from Large Language Models: A Study on AI Educational Writing Assistance
  Thiemo Wambsganss, Xiaotian Su, Vinitra Swamy, Seyed Parsa Neshaei, Roman Rietsche, Tanja Kaser (06 Nov 2023)
• Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation
  Xiangjue Dong, Yibo Wang, Philip S. Yu, James Caverlee (01 Nov 2023)
• Generative Language Models Exhibit Social Identity Biases
  Tiancheng Hu, Yara Kyrychenko, Steve Rathje, Nigel Collier, S. V. D. Linden, Jon Roozenbeek (24 Oct 2023)
• What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations
  Kavel Rao, Liwei Jiang, Valentina Pyatkin, Yuling Gu, Niket Tandon, Nouha Dziri, Faeze Brahman, Yejin Choi (24 Oct 2023)
• Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model
  Abhijith Chintam, Rahel Beloch, Willem H. Zuidema, Michael Hanna, Oskar van der Wal (19 Oct 2023)
• Learning from Red Teaming: Gender Bias Provocation and Mitigation in Large Language Models
  Hsuan Su, Cheng-Chu Cheng, Hua Farn, Shachi H. Kumar, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee (17 Oct 2023)
• "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters
  Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng (13 Oct 2023)
• Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems
  Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang (08 Oct 2023)
• ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale
  Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher, Navid Rekabsaz (02 Oct 2023)
• Learning Unbiased News Article Representations: A Knowledge-Infused Approach
  Sadia Kamal, Jimmy Hartford, Jeremy Willis, A. Bagavathi (12 Sep 2023)
• Studying the impacts of pre-training using ChatGPT-generated text on downstream tasks
  Sarthak Anand (02 Sep 2023)
• Bias and Fairness in Large Language Models: A Survey
  Isabel O. Gallegos, Ryan A. Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, Nesreen Ahmed (02 Sep 2023) [AILaw]
• UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory
  Haiwen Diao, Bo Wan, Yuhang Zhang, Xuecong Jia, Huchuan Lu, Long Chen (28 Aug 2023) [VLM]
• Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models
  Somayeh Ghanbarzadeh, Yan-ping Huang, Hamid Palangi, R. C. Moreno, Hamed Khanpour (20 Jul 2023)
• National Origin Discrimination in Deep-learning-powered Automated Resume Screening
  Sihang Li, Kuangzheng Li, Haibing Lu (13 Jul 2023)
• DBFed: Debiasing Federated Learning Framework based on Domain-Independent
  Jiale Li, Zhixin Li, Yibo Wang, Yao Li, Lei Wang (10 Jul 2023) [FedML]
• What Should Data Science Education Do with Large Language Models?
  Xinming Tu, James Zou, Weijie J. Su, Linjun Zhang (06 Jul 2023) [AI4Ed]
• Towards Measuring the Representation of Subjective Global Opinions in Language Models
  Esin Durmus, Karina Nyugen, Thomas I. Liao, Nicholas Schiefer, Amanda Askell, ..., Alex Tamkin, Janel Thamkul, Jared Kaplan, Jack Clark, Deep Ganguli (28 Jun 2023)
• Privacy and Fairness in Federated Learning: on the Perspective of Trade-off
  Huiqiang Chen, Tianqing Zhu, Tao Zhang, Wanlei Zhou, Philip S. Yu (25 Jun 2023) [FedML]
• TrustGPT: A Benchmark for Trustworthy and Responsible Large Language Models
  Yue Huang, Qihui Zhang, Philip S. Y, Lichao Sun (20 Jun 2023)
• Gender Bias in Transformer Models: A comprehensive survey
  Praneeth Nemani, Yericherla Deepak Joel, Pallavi Vijay, Farhana Ferdousi Liza (18 Jun 2023)
• Sociodemographic Bias in Language Models: A Survey and Forward Path
  Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, R. Passonneau (13 Jun 2023)
• Measuring Sentiment Bias in Machine Translation
  Kai Hartung, Aaricia Herygers, Shubham Kurlekar, Khabbab Zakaria, Taylan Volkan, Sören Gröttrup, Munir Georges (12 Jun 2023) [AI4CE]
• Safety and Fairness for Content Moderation in Generative Models
  Susan Hao, Piyush Kumar, Sarah Laszlo, Shivani Poddar, Bhaktipriya Radharapu, Renee Shelby (09 Jun 2023) [EGVM]
• Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks
  Katelyn Mei, Sonia Fereidooni, Aylin Caliskan (08 Jun 2023)
• Are fairness metric scores enough to assess discrimination biases in machine learning?
  Fanny Jourdan, Laurent Risser, Jean-Michel Loubes, Nicholas M. Asher (08 Jun 2023) [FaML]
• Towards Coding Social Science Datasets with Language Models
  Anonymous Acl, Taylor Sorensen, Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua R Gubler, David Wingate (03 Jun 2023) [ALM, SyDa]
• infoVerse: A Universal Framework for Dataset Characterization with Multidimensional Meta-information
  Jaehyung Kim, Yekyung Kim, Karin de Langis, Jinwoo Shin, Dongyeop Kang (30 May 2023)
• Detecting and Mitigating Indirect Stereotypes in Word Embeddings
  Erin E. George, Joyce A. Chew, Deanna Needell (23 May 2023)
• This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
  Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett (22 May 2023)
• BiasAsker: Measuring the Bias in Conversational AI System
  Yuxuan Wan, Wenxuan Wang, Pinjia He, Jiazhen Gu, Haonan Bai, Michael Lyu (21 May 2023)
• CHBias: Bias Evaluation and Mitigation of Chinese Conversational Language Models
  Jiaxu Zhao, Meng Fang, Zijing Shi, Yitong Li, Ling-Hao Chen, Mykola Pechenizkiy (18 May 2023)
• ChatGPT Perpetuates Gender Bias in Machine Translation and Ignores Non-Gendered Pronouns: Findings across Bengali and Five other Low-Resource Languages
  Sourojit Ghosh, Aylin Caliskan (17 May 2023)
• Think Twice: Measuring the Efficiency of Eliminating Prediction Shortcuts of Question Answering Models
  Lukáš Mikula, Michal Štefánik, Marek Petrovič, Petr Sojka (11 May 2023)
• Davinci the Dualist: the mind-body divide in large language models and in human learners
  I. Berent, Alexzander Sansiveri (10 May 2023) [AI4CE, VLM]
• Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
  Emilio Ferrara (07 Apr 2023) [SILM]
• Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling
  Stella Biderman, Hailey Schoelkopf, Quentin G. Anthony, Herbie Bradley, Kyle O'Brien, ..., USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, Oskar van der Wal (03 Apr 2023)
• She Elicits Requirements and He Tests: Software Engineering Gender Bias in Large Language Models
  Christoph Treude, Hideaki Hata (17 Mar 2023)
• MultiModal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision Language Models
  Sepehr Janghorbani, Gerard de Melo (16 Mar 2023) [VLM]