Show and Segment: Universal Medical Image Segmentation via In-Context Learning

25 March 2025
Yunhe Gao
Di Liu
Zhuowei Li
Yunsheng Li
Dongdong Chen
Mu Zhou
Dimitris N. Metaxas
Abstract

Medical image segmentation remains challenging due to the vast diversity of anatomical structures, imaging modalities, and segmentation tasks. While deep learning has made significant advances, current approaches struggle to generalize, as they require task-specific training or fine-tuning on unseen classes. We present Iris, a novel In-context Reference Image guided Segmentation framework that enables flexible adaptation to novel tasks through the use of reference examples, without fine-tuning. At its core, Iris features a lightweight context task encoding module that distills task-specific information from reference context image-label pairs. This rich context embedding is then used to guide the segmentation of target objects. By decoupling task encoding from inference, Iris supports diverse strategies, from one-shot inference and context example ensembling to object-level context example retrieval and in-context tuning. Through comprehensive evaluation across twelve datasets, we demonstrate that Iris performs strongly compared to task-specific models on in-distribution tasks. On seven held-out datasets, Iris shows superior generalization to out-of-distribution data and unseen classes. Further, Iris's task encoding module can automatically discover anatomical relationships across datasets and modalities, offering insights into medical objects without explicit anatomical supervision.
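The abstract describes a two-step interface: a task-encoding pass over reference image-label pairs, decoupled from inference on query images, so that one-shot inference and context-example ensembling fall out of the same model. The sketch below illustrates that interface only; it is not the authors' implementation, and every class, method, and shape (ContextTaskEncoder, IrisLikeSegmenter, the FiLM-style conditioning) is a hypothetical PyTorch stand-in.

```python
# Hypothetical sketch of an in-context segmentation interface where task
# encoding is decoupled from query inference. Not the paper's architecture.
import torch
import torch.nn as nn


class ContextTaskEncoder(nn.Module):
    """Distills a task embedding from a reference image-mask pair (assumed design)."""

    def __init__(self, in_channels: int = 1, embed_dim: int = 64):
        super().__init__()
        # Concatenate image and mask along channels, pool to one vector per pair.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels + 1, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),
        )

    def forward(self, ref_image: torch.Tensor, ref_mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([ref_image, ref_mask], dim=1)   # (B, C+1, D, H, W)
        return self.encoder(x).flatten(1)             # (B, embed_dim)


class IrisLikeSegmenter(nn.Module):
    """Segments a query volume conditioned on a task embedding (assumed design)."""

    def __init__(self, in_channels: int = 1, embed_dim: int = 64):
        super().__init__()
        self.task_encoder = ContextTaskEncoder(in_channels, embed_dim)
        self.backbone = nn.Conv3d(in_channels, embed_dim, kernel_size=3, padding=1)
        self.head = nn.Conv3d(embed_dim, 1, kernel_size=1)

    def encode_task(self, ref_image, ref_mask):
        # Decoupled step: run once per task, ensemble over several references,
        # or swap in retrieved / tuned embeddings at inference time.
        return self.task_encoder(ref_image, ref_mask)

    def forward(self, query_image, task_embedding):
        feats = self.backbone(query_image)                        # (B, E, D, H, W)
        # Simple channel-wise conditioning on the task embedding.
        feats = feats * task_embedding[:, :, None, None, None]
        return torch.sigmoid(self.head(feats))                    # foreground probability


if __name__ == "__main__":
    model = IrisLikeSegmenter()
    ref_img = torch.randn(1, 1, 16, 32, 32)
    ref_msk = torch.randint(0, 2, (1, 1, 16, 32, 32)).float()
    query = torch.randn(1, 1, 16, 32, 32)

    # One-shot inference: a single reference pair defines the task.
    task = model.encode_task(ref_img, ref_msk)
    pred_one_shot = model(query, task)

    # Context-ensemble inference: average embeddings from several references.
    task_ens = torch.stack(
        [model.encode_task(ref_img, ref_msk) for _ in range(3)]
    ).mean(0)
    pred_ensemble = model(query, task_ens)
    print(pred_one_shot.shape, pred_ensemble.shape)
```

Because the task embedding is computed separately from the query pass, the same trained weights can serve new anatomical targets at test time simply by supplying different reference pairs, which is the property the abstract emphasizes.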

@article{gao2025_2503.19359,
  title={Show and Segment: Universal Medical Image Segmentation via In-Context Learning},
  author={Yunhe Gao and Di Liu and Zhuowei Li and Yunsheng Li and Dongdong Chen and Mu Zhou and Dimitris N. Metaxas},
  journal={arXiv preprint arXiv:2503.19359},
  year={2025}
}