
arXiv:1610.09559 (v4, latest)

Rawlsian Fairness for Machine Learning

29 October 2016
Matthew Joseph
Michael Kearns
Jamie Morgenstern
Seth Neel
Aaron Roth
Abstract

Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of "fair equality of opportunity". In the context of a simple model of online decision making, we give an algorithm that satisfies this fairness constraint, while still being able to learn at a rate that is comparable to (but necessarily worse than) that of the best algorithms absent a fairness constraint. We prove a regret bound for fair algorithms in the linear contextual bandit framework that is a significant improvement over our technical companion paper [16], which gives black-box reductions in a more general setting. We analyze our algorithms both theoretically and experimentally. Finally, we introduce the notion of a "discrimination index", and show that standard algorithms for our problem exhibit structured discriminatory behavior, whereas the "fair" algorithms we develop do not.
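The abstract leaves the mechanism implicit. As a rough illustration only, and not a statement of the paper's actual algorithm (which is given for the linear contextual bandit setting), the sketch below shows one way a fairness constraint of this flavor is commonly realized in the simpler stochastic bandit setting: arms whose confidence intervals overlap are "chained" together and played uniformly at random, so no arm is preferred over another unless it is provably better. The function names (confidence_interval, fair_choice) and all constants are mine.

```python
import numpy as np

def confidence_interval(pulls, reward_sum, t, delta=0.05):
    """Empirical mean +/- a Hoeffding-style bound; rewards assumed in [0, 1]."""
    if pulls == 0:
        return 0.0, 1.0
    mean = reward_sum / pulls
    width = np.sqrt(np.log(2.0 * t / delta) / (2.0 * pulls))
    return max(0.0, mean - width), min(1.0, mean + width)

def fair_choice(intervals, rng):
    """Chain every arm whose interval overlaps the top arm's chain, then
    pick uniformly at random inside the chain: no arm is favored over
    another unless its interval lies strictly above (provably better)."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][1], reverse=True)
    chain, chain_low = [order[0]], intervals[order[0]][0]
    for i in order[1:]:
        low, high = intervals[i]
        if high >= chain_low:   # overlaps the chain so far: cannot be ruled out
            chain.append(i)
            chain_low = min(chain_low, low)
        else:
            break               # sorted by upper bound, so no later arm overlaps
    return int(rng.choice(chain))

# Toy run: three Bernoulli arms; the fair rule separates arms only once
# their confidence intervals stop overlapping.
rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.55]
pulls, sums = [0] * 3, [0.0] * 3
for t in range(1, 2001):
    ivs = [confidence_interval(pulls[a], sums[a], t) for a in range(3)]
    a = fair_choice(ivs, rng)
    sums[a] += float(rng.random() < true_means[a])
    pulls[a] += 1
```

The uniform-random choice inside the chain is what costs learning rate relative to an unconstrained algorithm: exploitation of an apparently better arm is deferred until the data certify the gap, which is consistent with the abstract's remark that the fair rate is comparable to, but necessarily worse than, the unconstrained one.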
