
Wasserstein-based fairness interpretability framework for machine learning models

6 November 2020
A. Miroshnikov
Konstandinos Kotsiopoulos
Ryan Franks
Arjun Ravi Kannan
Abstract

In this article, we introduce a fairness interpretability framework for measuring and explaining bias in classification and regression models at the level of a distribution. Motivated by the ideas of Dwork et al. (2012), we measure the model bias across sub-population distributions using the Wasserstein metric. The transport-theoretic characterization of the Wasserstein metric allows us to take into account the sign of the bias across the model distribution, which in turn yields a decomposition of the model bias into positive and negative components. To understand how predictors contribute to the model bias, we introduce and theoretically characterize bias attributions for predictors, called bias explanations, and investigate their stability. We also provide a formulation of bias explanations that accounts for the impact of missing values. In addition, motivated by the works of Štrumbelj and Kononenko (2014) and Lundberg and Lee (2017), we construct additive bias explanations by employing cooperative game theory and investigate their properties.
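A minimal sketch of the core quantity the abstract describes: the Wasserstein-1 distance between two subgroup score distributions, split into positive and negative transport components via the difference of empirical CDFs. This is an illustrative approximation under assumed conventions, not the paper's code; the function name `signed_w1_bias` and the synthetic data are hypothetical.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def signed_w1_bias(scores_g0, scores_g1, grid_size=1000):
    """Sketch (assumption, not the paper's implementation):
    approximate W1 between two 1-D score distributions and split it
    into positive/negative components using the CDF difference."""
    lo = min(scores_g0.min(), scores_g1.min())
    hi = max(scores_g0.max(), scores_g1.max())
    grid = np.linspace(lo, hi, grid_size)
    # empirical CDFs of each subgroup evaluated on a common grid
    F0 = np.searchsorted(np.sort(scores_g0), grid, side="right") / len(scores_g0)
    F1 = np.searchsorted(np.sort(scores_g1), grid, side="right") / len(scores_g1)
    diff = F0 - F1
    dx = grid[1] - grid[0]
    pos = np.sum(np.clip(diff, 0, None)) * dx   # mass transported in one direction
    neg = np.sum(np.clip(-diff, 0, None)) * dx  # mass transported in the other
    return pos, neg, pos + neg                  # total approximates W1

# Sanity check on synthetic model scores for two subgroups
rng = np.random.default_rng(0)
s0 = rng.beta(2, 5, size=5000)
s1 = rng.beta(2, 4, size=5000)
pos, neg, total = signed_w1_bias(s0, s1)
print(total, wasserstein_distance(s0, s1))  # the two totals should roughly agree
```

The split uses the fact that, in one dimension, W1 equals the integral of |F0 - F1|; keeping the positive and negative parts of F0 - F1 separately is one way to read off a signed decomposition of the bias.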
