FairDropout: Using Example-Tied Dropout to Enhance Generalization of Minority Groups

10 February 2025
Géraldin Nanfack
Eugene Belilovsky
Abstract

Deep learning models frequently exploit spurious features in the training data to achieve low training error, which often results in poor generalization under shifted test distributions. To address this issue, various methods from imbalanced learning, representation learning, and classifier recalibration have been proposed to make deep neural networks more robust to spurious correlations. In this paper, we observe that models trained with empirical risk minimization tend to generalize well on examples from the majority groups while memorizing instances from minority groups. Building on recent findings that memorization can be localized to a limited number of neurons, we apply example-tied dropout, a method we term FairDropout, to redirect this memorization to specific neurons that we subsequently drop out during inference. We empirically evaluate FairDropout on the subpopulation benchmark suite encompassing vision, language, and healthcare tasks, demonstrating that it significantly reduces reliance on spurious correlations and outperforms state-of-the-art methods.
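The mechanism described above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch implementation of an example-tied dropout layer in the spirit of FairDropout: channels are split into an always-active "generalization" block and a "memorization" block whose channels are tied to individual training examples through fixed random masks, and the memorization block is zeroed out at inference. The class name, the parameters p_gen and p_mem, and the exact masking scheme are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ExampleTiedDropout(nn.Module):
    """Sketch of example-tied dropout (assumes 2D inputs of shape (N, C)).

    The first n_gen channels form a shared "generalization" block that is
    always active. The remaining "memorization" channels are gated by a
    fixed, per-example random mask during training and dropped entirely
    at inference, so example-specific memorization is steered into
    channels the model never uses at test time.
    """

    def __init__(self, num_features: int, num_examples: int,
                 p_gen: float = 0.2, p_mem: float = 0.1, seed: int = 0):
        super().__init__()
        self.n_gen = int(p_gen * num_features)          # always-on channels
        n_mem = num_features - self.n_gen               # example-tied channels
        g = torch.Generator().manual_seed(seed)
        # Fixed per-example mask over memorization channels; a buffer, not a
        # trainable parameter, so it stays constant across training.
        mask = (torch.rand(num_examples, n_mem, generator=g) < p_mem).float()
        self.register_buffer("mem_mask", mask)

    def forward(self, x: torch.Tensor, idx: torch.Tensor = None) -> torch.Tensor:
        gen, mem = x[:, :self.n_gen], x[:, self.n_gen:]
        if self.training:
            assert idx is not None, "need example indices during training"
            mem = mem * self.mem_mask[idx].to(mem.dtype)  # example-tied gating
        else:
            mem = torch.zeros_like(mem)  # drop memorization block at test time
        return torch.cat([gen, mem], dim=1)

# Usage sketch: per-example masks in training, memorization dropped in eval.
layer = ExampleTiedDropout(num_features=512, num_examples=50_000)
feats, idx = torch.randn(8, 512), torch.arange(8)
out_train = layer(feats, idx)   # training: each row gets its own mask
layer.eval()
out_test = layer(feats)         # inference: memorization channels are zero
```

Keeping the masks fixed per example (rather than resampled each step, as in standard dropout) is what ties memorization to specific neurons; dropping those neurons at inference is then a cheap way to discard the memorized, spuriously correlated features.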

@article{nanfack2025_2502.06695,
  title={FairDropout: Using Example-Tied Dropout to Enhance Generalization of Minority Groups},
  author={Géraldin Nanfack and Eugene Belilovsky},
  journal={arXiv preprint arXiv:2502.06695},
  year={2025}
}