On the Origins of Sampling Bias: Implications on Fairness Measurement and Mitigation

Abstract

Accurately measuring discrimination is crucial to faithfully assessing the fairness of trained machine learning (ML) models. Any bias in measuring discrimination leads to either amplification or underestimation of the existing disparity. Several sources of bias exist, and it is typically assumed that bias resulting from machine learning is borne equally by different groups (e.g., females vs. males, whites vs. blacks). If, however, bias is borne differently by different groups, it may exacerbate discrimination against specific sub-populations. The term sampling bias, in particular, is used inconsistently in the literature to describe bias due to the sampling procedure. In this paper, we attempt to disambiguate this term by introducing clearly defined variants of sampling bias, namely, sample size bias (SSB) and underrepresentation bias (URB). Through an extensive set of experiments on benchmark datasets and using mainstream learning algorithms, we expose relevant observations in several model training scenarios. The observations are finally framed as actionable recommendations for practitioners.
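To make the SSB/URB distinction concrete, below is a minimal sketch on synthetic data; it is not the authors' experimental code, and the data-generating process, the choice of logistic regression, and statistical parity difference as the disparity metric are all illustrative assumptions:

    # Minimal sketch (not the paper's code) contrasting the two sampling-bias
    # variants named in the abstract, on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, minority_frac=0.5):
        # Protected attribute a: 1 = minority group.
        a = (rng.random(n) < minority_frac).astype(int)
        # One feature whose distribution shifts slightly with group membership.
        x = rng.normal(loc=0.5 * a, scale=1.0, size=n)
        # Outcome depends on the feature plus noise.
        y = (x + rng.normal(scale=1.0, size=n) > 0.5).astype(int)
        return np.column_stack([x, a]), y, a

    def spd(model, X, a):
        # Statistical parity difference: P(yhat=1 | a=0) - P(yhat=1 | a=1).
        p = model.predict(X)
        return p[a == 0].mean() - p[a == 1].mean()

    # Sample size bias (SSB): a fixed model evaluated on ever-smaller samples
    # yields increasingly noisy disparity estimates (the model is unchanged).
    X_tr, y_tr, _ = make_data(20_000)
    clf = LogisticRegression().fit(X_tr, y_tr)
    for n_eval in (10_000, 1_000, 100):
        estimates = []
        for _ in range(200):
            X_ev, _, a_ev = make_data(n_eval)
            estimates.append(spd(clf, X_ev, a_ev))
        print(f"SSB  n_eval={n_eval:>6}: SPD mean={np.mean(estimates):+.3f} "
              f"sd={np.std(estimates):.3f}")

    # Underrepresentation bias (URB): shrinking one group's share of the
    # *training* data changes the model itself, not just the measurement.
    X_te, _, a_te = make_data(50_000)
    for frac in (0.5, 0.1, 0.01):
        X_tr, y_tr, _ = make_data(20_000, minority_frac=frac)
        clf = LogisticRegression().fit(X_tr, y_tr)
        print(f"URB  minority share={frac:.2f}: "
              f"SPD on large test set = {spd(clf, X_te, a_te):+.3f}")

The design point the sketch illustrates: SSB leaves the model fixed and only perturbs the measurement (the standard deviation of the disparity estimate grows as the evaluation sample shrinks), whereas URB perturbs the training distribution and therefore the model itself, so the disparity persists even on a large, balanced test set.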

@article{zhioua2025_2503.17956,
  title={On the Origins of Sampling Bias: Implications on Fairness Measurement and Mitigation},
  author={Sami Zhioua and Ruta Binkyte and Ayoub Ouni and Farah Barika Ktata},
  journal={arXiv preprint arXiv:2503.17956},
  year={2025}
}