Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives

24 February 2025
Dilermando Queiroz
Anderson Carlos
André Anjos
Lilian Berton
Abstract

Ensuring equitable Artificial Intelligence (AI) in healthcare demands systems that make unbiased decisions across all demographic groups, bridging technical innovation with ethical principles. Foundation Models (FMs), trained on vast datasets through self-supervised learning, enable efficient adaptation across medical imaging tasks while reducing dependence on labeled data. These models show potential for enhancing fairness, though significant challenges remain in achieving consistent performance across demographic groups. While previous approaches focused primarily on model-level bias mitigation, our review reveals that fairness in FMs requires integrated, systematic interventions throughout the development pipeline, from data documentation to deployment protocols. This comprehensive framework advances current knowledge by demonstrating how systematic bias mitigation, combined with policy engagement, can address both technical and institutional barriers to equitable AI in healthcare. The development of equitable FMs is a critical step toward democratizing advanced healthcare technologies, particularly for underserved populations and for regions with limited medical infrastructure and computational resources.
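The abstract frames fairness operationally as consistent performance across demographic groups. As a minimal illustration of what auditing that property can look like (a sketch under assumed inputs, not the authors' evaluation protocol), the snippet below computes a per-group AUC and the worst-case gap between groups; the data, group labels, and function name are hypothetical.

import numpy as np
from sklearn.metrics import roc_auc_score

def per_group_auc(y_true, y_score, groups):
    """Per-group AUC and the worst-case gap between groups.
    Illustrative sketch only; not the paper's protocol."""
    aucs = {}
    for g in np.unique(groups):
        mask = groups == g
        # AUC is undefined when a group contains only one class.
        if len(np.unique(y_true[mask])) < 2:
            continue
        aucs[g] = roc_auc_score(y_true[mask], y_score[mask])
    gap = max(aucs.values()) - min(aucs.values())
    return aucs, gap

# Hypothetical labels, model scores, and demographic attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(0.3 * y_true + rng.normal(0.5, 0.25, size=1000), 0.0, 1.0)
groups = rng.choice(["A", "B", "C"], size=1000)

aucs, gap = per_group_auc(y_true, y_score, groups)
print(aucs, f"worst-case AUC gap: {gap:.3f}")

A small per-group gap is necessary but not sufficient for fairness; the paper argues such checks must be paired with interventions earlier in the pipeline, from data documentation through deployment protocols.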

View on arXiv: https://arxiv.org/abs/2502.16841
@article{queiroz2025_2502.16841,
  title={Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives},
  author={Dilermando Queiroz and Anderson Carlos and André Anjos and Lilian Berton},
  journal={arXiv preprint arXiv:2502.16841},
  year={2025}
}