Fair Play for Individuals, Foul Play for Groups? Auditing Anonymization's Impact on ML Fairness

Machine learning (ML) algorithms rely heavily on the availability of training data, which, depending on the domain, often includes sensitive information about data providers. This raises critical privacy concerns. Anonymization techniques have emerged as a practical solution to address these issues by generalizing features or suppressing data, making it harder to accurately identify individuals. Although recent studies have shown that privacy-enhancing technologies can influence ML predictions across different subgroups, thus affecting fair decision-making, the specific effects of anonymization techniques, such as k-anonymity, ℓ-diversity, and t-closeness, on ML fairness remain largely unexplored. In this work, we systematically audit the impact of anonymization techniques on ML fairness, evaluating both individual and group fairness. Our quantitative study reveals that anonymization can degrade group fairness metrics by up to four orders of magnitude. Conversely, similarity-based individual fairness metrics tend to improve under stronger anonymization, largely as a result of increased input homogeneity. By analyzing varying levels of anonymization across diverse privacy settings and data distributions, this study provides critical insights into the trade-offs between privacy, fairness, and utility, offering actionable guidelines for responsible AI development. Our code is publicly available at: this https URL.
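To make the group-fairness side of such an audit concrete, the sketch below shows one common way of quantifying it: the demographic parity difference, i.e., the gap in positive-prediction rates across sensitive groups, compared on predictions obtained from clear versus anonymized data. This is a minimal, self-contained illustration, not the authors' released code; all data is synthetic and the variable names (e.g., the binarized sensitive attribute) are illustrative assumptions only.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in positive-prediction rates across sensitive groups."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Synthetic stand-ins for model predictions on the clear and anonymized test sets.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=1_000)              # e.g., a binarized sensitive attribute
preds_clear = rng.binomial(1, 0.50 + 0.05 * sensitive)  # predictions trained/evaluated on clear data
preds_anon = rng.binomial(1, 0.50 + 0.20 * sensitive)   # predictions after anonymization

print("DPD (clear):     ", demographic_parity_difference(preds_clear, sensitive))
print("DPD (anonymized):", demographic_parity_difference(preds_anon, sensitive))

In a real audit, the two prediction vectors would come from models trained on the original dataset and on its k-anonymized (or ℓ-diverse / t-close) counterpart, with the same comparison repeated for other group metrics and for similarity-based individual fairness metrics.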
@article{arcolezi2025_2505.07985,
  title   = {Fair Play for Individuals, Foul Play for Groups? Auditing Anonymization's Impact on ML Fairness},
  author  = {Héber H. Arcolezi and Mina Alishahi and Adda-Akram Bendoukha and Nesrine Kaaniche},
  journal = {arXiv preprint arXiv:2505.07985},
  year    = {2025}
}