vesselFM: A Foundation Model for Universal 3D Blood Vessel Segmentation

26 November 2024
Bastian Wittmann
Yannick Wattenberg
Tamaz Amiranashvili
Suprosanna Shit
Bjoern H. Menze
Abstract

Segmenting 3D blood vessels is a critical yet challenging task in medical image analysis, owing to significant modality-specific variations in artifacts, vascular patterns and scales, signal-to-noise ratios, and background tissues. These variations, together with domain gaps arising from differing imaging protocols, limit the generalization of existing supervised learning-based methods and require tedious voxel-level annotations for each dataset separately. While foundation models promise to alleviate this limitation, they typically fail to generalize to blood vessel segmentation, which poses a unique, complex problem. In this work, we present vesselFM, a foundation model designed specifically for the broad task of 3D blood vessel segmentation. Unlike previous models, vesselFM generalizes effortlessly to unseen domains. To achieve zero-shot generalization, we train vesselFM on three heterogeneous data sources: a large, curated annotated dataset, data generated by a domain randomization scheme, and data sampled from a flow matching-based generative model. Extensive evaluations show that vesselFM outperforms state-of-the-art medical image segmentation foundation models across four (pre-)clinically relevant imaging modalities in zero-, one-, and few-shot scenarios, thereby providing a universal solution for 3D blood vessel segmentation.
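The training recipe described above mixes three heterogeneous data sources. Below is a minimal, hypothetical PyTorch-style sketch of how such source mixing could be wired into a training loader; the class names, random-patch stand-ins, and mixing weights are illustrative assumptions only and do not reflect vesselFM's actual pipeline.

    # Hypothetical sketch: interleaving three heterogeneous data sources during training.
    # All names and ratios are placeholders, not vesselFM's actual implementation.
    import random
    import torch
    from torch.utils.data import Dataset, DataLoader

    class SyntheticVesselDataset(Dataset):
        """Stand-in for any one source: yields a random 3D patch and a binary mask."""
        def __init__(self, n_samples: int, patch_size: int = 32):
            self.n_samples = n_samples
            self.patch_size = patch_size

        def __len__(self):
            return self.n_samples

        def __getitem__(self, idx):
            image = torch.randn(1, self.patch_size, self.patch_size, self.patch_size)
            mask = (torch.rand(1, self.patch_size, self.patch_size, self.patch_size) > 0.9).float()
            return image, mask

    class MixedSourceDataset(Dataset):
        """Draws each sample from one of several sources according to fixed mixing weights."""
        def __init__(self, sources, weights, epoch_length):
            assert len(sources) == len(weights)
            self.sources = sources
            self.weights = weights
            self.epoch_length = epoch_length

        def __len__(self):
            return self.epoch_length

        def __getitem__(self, idx):
            # Pick a source at random, then a random sample from that source.
            source = random.choices(self.sources, weights=self.weights, k=1)[0]
            return source[random.randrange(len(source))]

    # Three placeholder sources standing in for: curated real data,
    # domain-randomized data, and generative-model (flow matching) samples.
    real_data = SyntheticVesselDataset(n_samples=1000)
    domain_randomized = SyntheticVesselDataset(n_samples=1000)
    generated = SyntheticVesselDataset(n_samples=1000)

    mixed = MixedSourceDataset(
        sources=[real_data, domain_randomized, generated],
        weights=[0.5, 0.25, 0.25],  # illustrative ratios only
        epoch_length=4000,
    )
    loader = DataLoader(mixed, batch_size=2, shuffle=True)

    image, mask = next(iter(loader))
    print(image.shape, mask.shape)  # torch.Size([2, 1, 32, 32, 32]) for both

In practice, the per-source sampling ratios and patch extraction would be tuned to the actual curated, domain-randomized, and generated datasets rather than the uniform stand-ins used here.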

@article{wittmann2025_2411.17386,
  title={vesselFM: A Foundation Model for Universal 3D Blood Vessel Segmentation},
  author={Bastian Wittmann and Yannick Wattenberg and Tamaz Amiranashvili and Suprosanna Shit and Bjoern Menze},
  journal={arXiv preprint arXiv:2411.17386},
  year={2025}
}