ResearchTrend.AI


Rethinking Fair Representation Learning for Performance-Sensitive Tasks

5 October 2024
Charles Jones
Fabio De Sousa Ribeiro
Mélanie Roschewitz
Daniel Coelho De Castro
Ben Glocker
Topics: FaML, OOD, CML
Abstract

We investigate the prominent class of fair representation learning methods for bias mitigation. Using causal reasoning to define and formalise different sources of dataset bias, we reveal important implicit assumptions inherent to these methods. We prove fundamental limitations on fair representation learning when evaluation data is drawn from the same distribution as training data and run experiments across a range of medical modalities to examine the performance of fair representation learning under distribution shifts. Our results explain apparent contradictions in the existing literature and reveal how rarely considered causal and statistical aspects of the underlying data affect the validity of fair representation learning. We raise doubts about current evaluation practices and the applicability of fair representation learning methods in performance-sensitive settings. We argue that fine-grained analysis of dataset biases should play a key role in the field moving forward.
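The abstract refers to fair representation learning methods that remove sensitive-attribute information from learned features. As a rough illustration of the simplest linear instance of this idea, the sketch below regresses a sensitive attribute out of each feature column so the resulting representation is linearly uncorrelated with it. This is a minimal assumed example for intuition only, not the methods or experiments studied in the paper.

```python
import numpy as np

def decorrelate(X, a):
    """Regress the sensitive attribute `a` out of every column of `X`,
    so each resulting column has zero sample covariance with `a`.
    This only removes *linear* dependence on the attribute."""
    a_c = a - a.mean()                      # centre the attribute
    coef = (X.T @ a_c) / (a_c @ a_c)        # per-column regression slope
    return X - np.outer(a_c, coef)          # subtract the explained part

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=200).astype(float)    # binary sensitive attribute
X = rng.normal(size=(200, 5)) + 0.8 * a[:, None]  # features that leak the attribute
X_fair = decorrelate(X, a)

# sample covariance between each "fair" feature and the attribute is ~0
cov = (X_fair - X_fair.mean(0)).T @ (a - a.mean()) / len(a)
```

Note that decorrelation of this kind gives no guarantee against nonlinear leakage of the attribute, which is one reason deep fair representation methods use stronger (e.g. adversarial) objectives.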

View on arXiv
@article{jones2025_2410.04120,
  title={Rethinking Fair Representation Learning for Performance-Sensitive Tasks},
  author={Charles Jones and Fabio de Sousa Ribeiro and Mélanie Roschewitz and Daniel C. Castro and Ben Glocker},
  journal={arXiv preprint arXiv:2410.04120},
  year={2025}
}