Soliciting Stakeholders' Fairness Notions in Child Maltreatment Predictive Systems

arXiv:2102.01196 · 1 February 2021
H. Cheng, Logan Stapleton, Ruiqi Wang, Paige E Bullock, Alexandra Chouldechova, Zhiwei Steven Wu, Haiyi Zhu
FaML

Papers citing "Soliciting Stakeholders' Fairness Notions in Child Maltreatment Predictive Systems"

11 / 11 papers shown
Laypeople's Attitudes Towards Fair, Affirmative, and Discriminatory Decision-Making Algorithms
Gabriel Lima, Nina Grgic-Hlaca, Markus Langer, Yixin Zou
FaML · 41 · 0 · 0 · 12 May 2025

EARN Fairness: Explaining, Asking, Reviewing, and Negotiating Artificial Intelligence Fairness Metrics Among Stakeholders
Lin Luo, Yuri Nakao, Mathieu Chollet, Hiroya Inakoshi, Simone Stumpf
38 · 0 · 0 · 16 Jul 2024

Understanding Frontline Workers' and Unhoused Individuals' Perspectives on AI Used in Homeless Services
Tzu-Sheng Kuo, Hong Shen, Jisoo Geum, N. Jones, Jason I. Hong, Haiyi Zhu, Kenneth Holstein
21 · 59 · 0 · 17 Mar 2023

A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
37 · 9 · 0 · 06 Jun 2022

Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness
Yuri Nakao, Lorenzo Strappelli, Simone Stumpf, A. Naseer, D. Regoli, Giulia Del Gamba
4 · 29 · 0 · 01 Jun 2022

Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders
Logan Stapleton, Min Hun Lee, Diana Qing, Mary-Frances Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu
38 · 55 · 0 · 18 May 2022

Perspectives on Incorporating Expert Feedback into Model Updates
Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar
30 · 11 · 0 · 13 May 2022

Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support
Anna Kawakami, Venkatesh Sivaraman, H. Cheng, Logan Stapleton, Yanghuidi Cheng, Diana Qing, Adam Perer, Zhiwei Steven Wu, Haiyi Zhu, Kenneth Holstein
28 · 106 · 0 · 05 Apr 2022

How to Train a (Bad) Algorithmic Caseworker: A Quantitative Deconstruction of Risk Assessments in Child-Welfare
Devansh Saxena, Charlie Repaci, Melanie Sage, Shion Guha
CML · 16 · 16 · 0 · 11 Mar 2022

Improving fairness in machine learning systems: What do industry practitioners need?
Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé, Miroslav Dudík, Hanna M. Wallach
FaML · HAI · 192 · 742 · 0 · 13 Dec 2018

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
FaML · 207 · 2,082 · 0 · 24 Oct 2016