ResearchTrend.AI


Formalising Anti-Discrimination Law in Automated Decision Systems

29 June 2024
Holli Sargeant
Måns Magnusson
    FaML
Main: 8 pages · Bibliography: 5 pages · Appendix: 1 page
Abstract

Algorithmic discrimination is a critical concern as machine learning models are used in high-stakes decision-making in legally protected contexts. Although substantial research on algorithmic bias and discrimination has led to the development of fairness metrics, several critical legal issues remain unaddressed in practice. To address these gaps, we introduce a novel decision-theoretic framework grounded in anti-discrimination law of the United Kingdom, which has global influence and aligns more closely with European and Commonwealth legal systems. We propose the 'conditional estimation parity' metric, which accounts for estimation error and the underlying data-generating process, aligning with legal standards. Through a real-world example based on an algorithmic credit discrimination case, we demonstrate the practical application of our formalism and provide insights for aligning fairness metrics with legal principles. Our approach bridges the divide between machine learning fairness metrics and anti-discrimination law, offering a legally grounded framework for developing non-discriminatory automated decision systems.
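To make the idea of a group-conditional fairness check concrete, here is a minimal, hypothetical sketch in Python. It computes the gap in positive-decision rates between two protected groups, restricted to individuals whose estimated score clears a decision threshold. This is a generic group-conditional parity check for illustration only; it is not the paper's 'conditional estimation parity' metric, whose formal definition (including its treatment of estimation error and the data-generating process) is given in the paper itself. All names, the threshold, and the synthetic data are assumptions.

```python
import numpy as np

def conditional_rate_gap(decisions, group, scores, threshold=0.5):
    """Absolute gap in positive-decision rates between two groups,
    restricted to individuals whose estimated score meets `threshold`.

    decisions : array of 0/1 outcomes of the automated system
    group     : array of 0/1 protected-group membership
    scores    : array of estimated outcome scores in [0, 1]
    """
    mask = scores >= threshold
    rate_a = decisions[mask & (group == 0)].mean()
    rate_b = decisions[mask & (group == 1)].mean()
    return abs(rate_a - rate_b)

# Synthetic credit-style data with a slight group-dependent skew,
# mimicking a system whose decisions drift from its own score estimates.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)
scores = rng.uniform(size=n)
noise = rng.normal(0.0, 0.1, size=n)
decisions = (scores + 0.1 * group + noise >= 0.5).astype(float)

gap = conditional_rate_gap(decisions, group, scores)
print(f"conditional rate gap: {gap:.3f}")
```

A gap near zero suggests the two groups receive positive decisions at similar rates among comparably scored individuals; a large gap flags a disparity worth legal and statistical scrutiny.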

@article{sargeant2025_2407.00400,
  title={Formalising Anti-Discrimination Law in Automated Decision Systems},
  author={Holli Sargeant and Måns Magnusson},
  journal={arXiv preprint arXiv:2407.00400},
  year={2025}
}