Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks
arXiv:2306.05550 · 8 June 2023
Katelyn Mei, Sonia Fereidooni, Aylin Caliskan

Papers citing "Bias Against 93 Stigmatized Groups in Masked Language Models and Downstream Sentiment Classification Tasks"

7 papers shown
Addressing Bias in Generative AI: Challenges and Research Opportunities in Information Management
Xiahua Wei, Naveen Kumar, Han Zhang
22 Jan 2025

Identity-related Speech Suppression in Generative AI Content Moderation
Oghenefejiro Isaacs Anigboro, Charlie M. Crawford, Danaë Metaxa, Sorelle A. Friedler
09 Sep 2024

Exploring LGBTQ+ Bias in Generative AI Answers across Different Country and Religious Contexts
L. Vicsek, Anna Vancsó, Mike Zajko, Judit Takacs
03 Jul 2024

Uncovering Bias in Large Vision-Language Models at Scale with Counterfactuals
Phillip Howard, Kathleen C. Fraser, Anahita Bhiwandiwalla, S. Kiritchenko
30 May 2024

"I'm sorry to hear that": Finding New Biases in Language Models with a
  Holistic Descriptor Dataset
"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset
Eric Michael Smith
Melissa Hall
Melanie Kambadur
Eleonora Presani
Adina Williams
76
129
0
18 May 2022
Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
Robert Wolfe, Aylin Caliskan
01 Oct 2021

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
01 Feb 2021