Pre-trained Models Succeed in Medical Imaging with Representation Similarity Degradation

11 March 2025
Wenqiang Zu, Shenghao Xie, Hao Chen, Lei Ma
MedIm
Abstract

This paper investigates the critical problem of representation similarity evolution during cross-domain transfer learning, with particular focus on understanding why pre-trained models remain effective when adapted to medical imaging tasks despite significant domain gaps. The study establishes a rigorous problem definition centered on quantifying and analyzing representation similarity trajectories throughout the fine-tuning process, while carefully delineating a scope that encompasses both medical image analysis and broader cross-domain adaptation scenarios. Our empirical findings reveal three critical discoveries: the potential existence of high-performance models that preserve both task accuracy and representation similarity to their pre-trained origins; a robust linear correlation between layer-wise similarity metrics and representation quality indicators; and distinct adaptation patterns that distinguish supervised from self-supervised pre-training paradigms. The proposed similarity space framework not only provides mechanistic insights into knowledge transfer dynamics but also raises fundamental questions about the optimal utilization of pre-trained models. These results advance our understanding of neural network adaptation processes while offering practical implications for transfer learning strategies that extend beyond medical imaging applications. Code will be released upon acceptance.
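The abstract does not name the layer-wise similarity metric the authors use, so the following is only an illustrative sketch, assuming a linear Centered Kernel Alignment (CKA) style measure, which is a common choice for comparing a layer's activations before and after fine-tuning. The matrices features_pretrained and features_finetuned below are hypothetical stand-ins for activations extracted from the same layer of the pre-trained and fine-tuned models on the same batch of images.

import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices of shape (n, d1) and (n, d2)."""
    # Center each feature matrix across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Numerator: squared Frobenius norm of the cross-covariance.
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    # Denominator: product of the self-covariance norms.
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return float(cross / (norm_x * norm_y))

# Usage with random stand-in activations (hypothetical shapes, e.g. a ViT layer):
rng = np.random.default_rng(0)
features_pretrained = rng.normal(size=(256, 768))
features_finetuned = features_pretrained + 0.1 * rng.normal(size=(256, 768))
print(f"layer-wise CKA: {linear_cka(features_pretrained, features_finetuned):.3f}")

Tracking this value per layer at successive fine-tuning checkpoints yields the kind of similarity trajectory the abstract describes, with values near 1 indicating representations that stay close to their pre-trained origins.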

View on arXiv: https://arxiv.org/abs/2503.07958
@article{zu2025_2503.07958,
  title={Pre-trained Models Succeed in Medical Imaging with Representation Similarity Degradation},
  author={Wenqiang Zu and Shenghao Xie and Hao Chen and Lei Ma},
  journal={arXiv preprint arXiv:2503.07958},
  year={2025}
}