A Review of Fairness and A Practical Guide to Selecting Context-Appropriate Fairness Metrics in Machine Learning

Abstract

Recent regulatory proposals for artificial intelligence emphasize fairness requirements for machine learning models. However, precisely defining the appropriate measure of fairness is challenging due to differing philosophical, cultural, and political contexts. Biases can infiltrate machine learning models in complex ways that depend on the model's context, rendering a single common metric of fairness insufficient. This ambiguity highlights the need for criteria that guide the selection of context-aware measures, an issue of growing importance given the proliferation of ever-tighter regulatory requirements. To address this, we developed a flowchart to guide the selection of contextually appropriate fairness measures. The flowchart was formulated from twelve criteria, including model assessment criteria, model selection criteria, and data bias. We also review the fairness literature in the context of machine learning and link it to core regulatory instruments to assist policymakers, AI developers, researchers, and other stakeholders in appropriately addressing fairness concerns and complying with relevant regulatory requirements.
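
For illustration only (this sketch is not drawn from the paper), two widely used group-fairness metrics, demographic parity difference and equal-opportunity difference, can disagree on the same predictions: a perfectly accurate classifier satisfies equal opportunity yet violates demographic parity whenever group base rates differ. The hypothetical data below is made up for this example; it simply shows why no single common metric suffices across contexts.

# Minimal sketch of two group-fairness metrics on hypothetical data (illustrative, not from the paper).
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Absolute difference in positive-prediction rates between groups 0 and 1.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    # Absolute difference in true-positive rates (recall) between groups 0 and 1.
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

# Hypothetical labels: group 0 has a 50% base rate, group 1 has 25%.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = y_true.copy()  # a perfectly accurate classifier
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_difference(y_pred, group))          # 0.25: demographic parity violated
print(equal_opportunity_difference(y_true, y_pred, group))   # 0.0: equal opportunity satisfied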

@article{barr2025_2411.06624,
  title={A Review of Fairness and A Practical Guide to Selecting Context-Appropriate Fairness Metrics in Machine Learning},
  author={Caleb J.S. Barr and Olivia Erdelyi and Paul D. Docherty and Randolph C. Grace},
  journal={arXiv preprint arXiv:2411.06624},
  year={2025}
}