arXiv:2402.04420
Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways
6 February 2024
Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett
Papers citing "Measuring machine learning harms from stereotypes: requires understanding who is being harmed by which errors in what ways" (3 papers)

1. "Identifying Fairness Issues in Automatically Generated Testing Content". Kevin Stowe, Benny Longwill, Alyssa Francis, Tatsuya Aoyama, Debanjan Ghosh, Swapna Somasundaran. 23 Apr 2024.
2. "A Systematic Study of Bias Amplification". Melissa Hall, Laurens van der Maaten, Laura Gustafson, Maxwell Jones, Aaron B. Adcock. 27 Jan 2022.
3. "Capturing Ambiguity in Crowdsourcing Frame Disambiguation". Anca Dumitrache, Lora Aroyo, Chris Welty. 01 May 2018.