Examining risks of racial biases in NLP tools for child protective services

30 May 2023
Anjalie Field, Amanda Coston, Nupoor Gandhi, Alexandra Chouldechova, Emily Putnam-Hornstein, David Steier, Yulia Tsvetkov

Papers citing "Examining risks of racial biases in NLP tools for child protective services" (4 of 4 papers shown)

Identity-related Speech Suppression in Generative AI Content Moderation
Oghenefejiro Isaacs Anigboro, Charlie M. Crawford, Danaë Metaxa, Sorelle A. Friedler
Citations: 0 · 09 Sep 2024

OccuQuest: Mitigating Occupational Bias for Inclusive Large Language Models
Mingfeng Xue, Dayiheng Liu, Kexin Yang, Guanting Dong, Wenqiang Lei, Zheng Yuan, Chang Zhou, Jingren Zhou
Topics: LLMAG
Citations: 2 · 25 Oct 2023

A Human-Centered Review of the Algorithms used within the U.S. Child Welfare System
Devansh Saxena, Karla A. Badillo-Urquiola, Pamela J. Wisniewski, Shion Guha
Citations: 106 · 07 Mar 2020

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
Topics: FaML
Citations: 2,090 · 24 Oct 2016