ResearchTrend.AI
Agglomerating Large Vision Encoders via Distillation for VFSS Segmentation

3 April 2025
Chengxi Zeng
Yuxuan Jiang
Fan Zhang
Alberto Gambaruto
Tilo Burghardt
Abstract

Foundation models have achieved considerable success in medical imaging. However, adapting them to downstream tasks remains costly due to the size of their image encoders, and inference is also computationally expensive. Although lightweight variants of these foundation models exist, their performance is constrained by limited model capacity and suboptimal training strategies. To achieve a better trade-off between complexity and performance, we propose a new framework that improves the performance of low-complexity models via knowledge distillation from multiple large medical foundation models (e.g., MedSAM, RAD-DINO, MedCLIP), each specializing in different vision tasks, with the goal of bridging the performance gap on medical image segmentation tasks. The agglomerated model demonstrates superior generalization across 12 segmentation tasks, whereas specialized models require explicit training for each task. Our approach achieves an average gain of 2% in Dice coefficient over simple distillation.
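The core idea of agglomerating multiple teachers can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's code: it assumes the student's features are mapped into each teacher's embedding space by a learned linear projection and regressed onto that teacher's features, with the per-teacher losses averaged. The names, dimensions, and projection setup here are illustrative assumptions.

```python
# Hypothetical sketch of multi-teacher feature distillation (NumPy, not the paper's code).
# The student's feature matrix is projected into each teacher's embedding space and
# regressed onto that teacher's features; per-teacher MSE losses are averaged.
import numpy as np

rng = np.random.default_rng(0)

def mse(a, b):
    """Mean squared error between two equally shaped arrays."""
    return float(np.mean((a - b) ** 2))

def multi_teacher_distill_loss(student_feat, teacher_feats, projections):
    """student_feat: (N, Ds); teacher_feats: list of (N, Dt_i);
    projections: list of (Ds, Dt_i) learned linear maps into each teacher's space."""
    losses = [mse(student_feat @ P, t) for P, t in zip(projections, teacher_feats)]
    return sum(losses) / len(losses)

# Toy example with three "teachers" of different embedding widths,
# stand-ins for encoders such as MedSAM, RAD-DINO, and MedCLIP.
student = rng.standard_normal((4, 64))
teachers = [rng.standard_normal((4, d)) for d in (256, 384, 512)]
projs = [rng.standard_normal((64, d)) / 8.0 for d in (256, 384, 512)]

loss = multi_teacher_distill_loss(student, teachers, projs)
```

In practice the projections would be trained jointly with the student by gradient descent, so that a single low-complexity encoder absorbs complementary knowledge from all teachers rather than matching only one.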

@article{zeng2025_2504.02351,
  title={Agglomerating Large Vision Encoders via Distillation for VFSS Segmentation},
  author={Chengxi Zeng and Yuxuan Jiang and Fan Zhang and Alberto Gambaruto and Tilo Burghardt},
  journal={arXiv preprint arXiv:2504.02351},
  year={2025}
}