ResearchTrend.AI

When majority rules, minority loses: bias amplification of gradient descent

19 May 2025
François Bachoc
Jérôme Bolte
Ryan Boustany
Jean-Michel Loubes
    FaML
arXiv (abs) · PDF · HTML
Main: 9 pages · 8 figures · 4 tables · Bibliography: 3 pages · Appendix: 16 pages
Abstract

Despite growing empirical evidence of bias amplification in machine learning, its theoretical foundations remain poorly understood. We develop a formal framework for majority-minority learning tasks, showing how standard training can favor majority groups and produce stereotypical predictors that neglect minority-specific features. Assuming population and variance imbalance, our analysis reveals three key findings: (i) the close proximity between "full-data" and stereotypical predictors, (ii) the dominance of a region where training the entire model tends to merely learn the majority traits, and (iii) a lower bound on the additional training required. Our results are illustrated through experiments in deep learning for tabular and image classification tasks.
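The mechanism behind finding (i) can be illustrated with a toy experiment. The following sketch is not the paper's construction; it is a minimal, hypothetical setup in which the majority group's label depends on one feature and the minority group's on another. With a 90/10 population imbalance, the averaged gradient is dominated by the majority term, so plain gradient descent converges to a near-stereotypical predictor that largely ignores the minority-specific feature.

```python
import random

random.seed(0)

def make_data(n_maj=900, n_min=100):
    """Toy majority-minority task: majority label is x1, minority label is x2."""
    data = []
    for _ in range(n_maj):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
        data.append(((x1, x2), x1))  # majority group: y = x1
    for _ in range(n_min):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
        data.append(((x1, x2), x2))  # minority group: y = x2
    return data

def train(data, steps=200, lr=0.1):
    """Full-batch gradient descent on squared loss for a linear model w1*x1 + w2*x2."""
    w1 = w2 = 0.0
    n = len(data)
    for _ in range(steps):
        g1 = g2 = 0.0
        for (x1, x2), y in data:
            err = w1 * x1 + w2 * x2 - y
            g1 += err * x1 / n
            g2 += err * x2 / n
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2

w1, w2 = train(make_data())
# The weight on the majority feature ends up near its population share (~0.9),
# while the minority feature's weight stays small (~0.1): the full-data
# predictor sits close to the stereotypical one.
print(f"weight on majority feature: {w1:.2f}")
print(f"weight on minority feature: {w2:.2f}")
```

Because the features are independent, the population optimum here is roughly (0.9, 0.1), so even at convergence the shared model encodes mostly the majority trait; fitting the minority would require group-aware training or reweighting.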

@article{bachoc2025_2505.13122,
  title={When majority rules, minority loses: bias amplification of gradient descent},
  author={François Bachoc and Jérôme Bolte and Ryan Boustany and Jean-Michel Loubes},
  journal={arXiv preprint arXiv:2505.13122},
  year={2025}
}