
Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways

6 February 2024
Angelina Wang
Xuechunzi Bai
Solon Barocas
Su Lin Blodgett
    FaML

Papers citing "Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways"

3 papers shown:

  1. Identifying Fairness Issues in Automatically Generated Testing Content
     Kevin Stowe, Benny Longwill, Alyssa Francis, Tatsuya Aoyama, Debanjan Ghosh, Swapna Somasundaran (23 Apr 2024)
  2. A Systematic Study of Bias Amplification
     Melissa Hall, Laurens van der Maaten, Laura Gustafson, Maxwell Jones, Aaron B. Adcock (27 Jan 2022)
  3. Capturing Ambiguity in Crowdsourcing Frame Disambiguation [FedML]
     Anca Dumitrache, Lora Aroyo, Chris Welty (01 May 2018)