Reconsidering Fairness Through Unawareness from the Perspective of Model Multiplicity

22 May 2025
Benedikt Höltgen
Nuria Oliver
Main: 16 pages · 9 figures · 4 tables · Bibliography: 2 pages · Appendix: 8 pages
Abstract

Fairness through Unawareness (FtU) describes the idea that discrimination against demographic groups can be avoided by not considering group membership in decisions or predictions. This idea has long been criticized in the machine learning literature as insufficient to ensure fairness. Moreover, using additional features is typically thought to increase predictive accuracy for all groups, so FtU is sometimes regarded as detrimental to all groups. In this paper, we show both theoretically and empirically that FtU can reduce algorithmic discrimination without necessarily reducing accuracy. We connect this insight with the literature on Model Multiplicity, to which we contribute novel theoretical and empirical results. Furthermore, we illustrate how, in a real-life application, FtU can contribute to the deployment of more equitable policies without losing efficacy. Our findings suggest that FtU is worth considering in practical applications, particularly in high-risk scenarios, and that the use of protected attributes such as gender in predictive models should be accompanied by a clear and well-founded justification.
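To make the FtU setup concrete, here is a minimal sketch, not the authors' experiments: it trains the same classifier on synthetic data once with and once without the protected attribute, then compares test accuracy and a demographic-parity gap. The data generator, the logistic-regression model, and the fit_and_eval helper are illustrative assumptions, not anything taken from the paper.

```python
# Minimal FtU sketch (illustrative assumptions only): train the same model
# class with and without the protected attribute and compare accuracy and a
# demographic-parity gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                        # protected attribute (e.g. gender)
x = rng.normal(size=(n, 3)) + 0.3 * group[:, None]   # features mildly correlated with group
y = (x.sum(axis=1) + 0.5 * group + rng.normal(size=n) > 1.0).astype(int)

def fit_and_eval(X):
    """Fit a classifier; return test accuracy and the absolute gap in
    positive-prediction rates between the two groups."""
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0)
    pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())
    return acc, gap

X_aware = np.column_stack([x, group])   # model sees the protected attribute
X_unaware = x                           # FtU: protected attribute withheld
for name, X in [("aware", X_aware), ("unaware (FtU)", X_unaware)]:
    acc, gap = fit_and_eval(X)
    print(f"{name:14s} accuracy={acc:.3f}  parity gap={gap:.3f}")
```

On this toy data the unaware model usually shows a smaller parity gap at comparable accuracy; whether and when that holds on real data is exactly the question the paper studies.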

@article{höltgen2025_2505.16638,
  title={Reconsidering Fairness Through Unawareness from the Perspective of Model Multiplicity},
  author={Benedikt Höltgen and Nuria Oliver},
  journal={arXiv preprint arXiv:2505.16638},
  year={2025}
}