ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Multi-Dimensional Gender Bias Classification
Emily Dinan, Angela Fan, Ledell Yu Wu, Jason Weston, Douwe Kiela, Adina Williams
1 May 2020 · FaML
arXiv:2005.00614

Papers citing "Multi-Dimensional Gender Bias Classification"

17 / 17 papers shown
The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI
Christopher Burger, Charles Walter, Thai Le
20 Jan 2025 · AAML

Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution
Flor Miriam Plaza del Arco, A. C. Curry, Alba Curry, Gavin Abercrombie, Dirk Hovy
05 Mar 2024

Zero-Shot Robustification of Zero-Shot Models
Dyah Adila, Changho Shin, Lin Cai, Frederic Sala
08 Sep 2023

Uncovering and Quantifying Social Biases in Code Generation
Y. Liu, Xiaokang Chen, Yan Gao, Zhe Su, Fengji Zhang, Daoguang Zan, Jian-Guang Lou, Pin-Yu Chen, Tsung-Yi Ho
24 May 2023

This Prompt is Measuring <MASK>: Evaluating Bias Evaluation in Language Models
Seraphina Goldfarb-Tarrant, Eddie L. Ungless, Esma Balkir, Su Lin Blodgett
22 May 2023

Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts
Katie Seaborn, Shruti Chandra, Thibault Fabre
22 Apr 2023

Mind Your Bias: A Critical Review of Bias Detection Methods for Contextual Language Models
Silke Husse, Andreas Spitz
15 Nov 2022

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams
18 May 2022

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models
Pieter Delobelle, E. Tokpo, T. Calders, Bettina Berendt
14 Dec 2021

CO-STAR: Conceptualisation of Stereotypes for Analysis and Reasoning
Teyun Kwon, Anandha Gopalan
01 Dec 2021

A Word on Machine Ethics: A Response to Jiang et al. (2021)
Zeerak Talat, Hagen Blix, Josef Valvoda, M. I. Ganesh, Ryan Cotterell, Adina Williams
07 Nov 2021 · SyDa, FaML

An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
Nicholas Meade, Elinor Poole-Dayan, Siva Reddy
16 Oct 2021

Automatically Exposing Problems with Neural Dialog Models
Dian Yu, Kenji Sagae
14 Sep 2021

Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling
Emily Dinan, Gavin Abercrombie, A. S. Bergman, Shannon L. Spruit, Dirk Hovy, Y-Lan Boureau, Verena Rieser
07 Jul 2021

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman
30 Sep 2020

Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation
Tianlu Wang, Xi Victoria Lin, Nazneen Rajani, Bryan McCann, Vicente Ordonez, Caiming Xiong
03 May 2020 · CVBM

Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
10 Nov 2019