ResearchTrend.AI

arXiv:2002.07676

A Possibility in Algorithmic Fairness: Calibrated Scores for Fair Classifications

18 February 2020
Claire Lazar Reich
Suhas Vijaykumar
    FaML
Abstract

Calibration and equal error rates are fundamental criteria of algorithmic fairness that have been shown to conflict with one another. This paper proves that they can be satisfied simultaneously in settings where decision-makers use risk scores to assign binary treatments. In particular, we derive necessary and sufficient conditions for the existence of calibrated scores that yield classifications achieving equal error rates. We then present an algorithm that searches for the most informative score subject to both calibration and minimal error rate disparity. Applied to a real criminal justice risk assessment, we show that our method can eliminate error disparities while maintaining calibration. In a separate application to credit lending, the procedure provides a solution that is both fairer and more profitable than a common alternative that omits sensitive features.
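To make the two fairness criteria in the abstract concrete, the sketch below shows how one might *measure* them for a given score: group-wise calibration (within each score bin, the observed outcome rate should match the score) and the error-rate gap (difference in false positive and false negative rates across groups at a threshold). This is only an illustrative diagnostic, not the paper's algorithm; the function names and binning scheme are my own assumptions.

```python
import numpy as np

def calibration_by_group(score, y, group, bins=10):
    """Mean observed outcome per score bin, per group.

    For a calibrated score, each bin's mean outcome should be close
    to the bin's score value in every group.
    """
    edges = np.linspace(0.0, 1.0, bins + 1)
    result = {}
    for g in np.unique(group):
        m = group == g
        # Map each score to its bin index 0..bins-1.
        idx = np.clip(np.digitize(score[m], edges) - 1, 0, bins - 1)
        result[g] = {int(b): float(y[m][idx == b].mean())
                     for b in np.unique(idx)}
    return result

def error_rate_gap(score, y, group, threshold):
    """Absolute FPR and FNR gaps between two groups when the score
    is thresholded into a binary treatment."""
    rates = []
    for g in np.unique(group):
        m = group == g
        pred = score[m] >= threshold
        fpr = pred[y[m] == 0].mean()       # false positive rate
        fnr = (~pred)[y[m] == 1].mean()    # false negative rate
        rates.append((fpr, fnr))
    (f0, n0), (f1, n1) = rates
    return abs(f0 - f1), abs(n0 - n1)
```

With synthetic data where the score equals the true outcome probability in both groups, `calibration_by_group` returns bin means near the bin midpoints and `error_rate_gap` returns gaps near zero; the paper's contribution is characterizing when a score with both properties exists and finding the most informative one.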
