Segment Anything Model (SAM) demonstrates powerful zero-shot capabilities; however, its accuracy and robustness decrease significantly when applied to medical image segmentation. Existing methods address this issue through modality fusion, integrating textual and image information to provide more detailed priors. In this study, we argue that the granularity of the text and the domain gap affect the accuracy of these priors. Furthermore, the discrepancy between high-level abstract semantics and pixel-level boundary details in images can introduce noise into the fusion process. To address this, we propose Prior-Guided SAM (PG-SAM), which employs a fine-grained modality prior aligner to leverage specialized medical knowledge for better modality alignment. The core of our method lies in efficiently bridging the domain gap with fine-grained text from a medical LLM, which also improves the quality of the priors after modality alignment and thus yields more accurate segmentation. In addition, our decoder enhances the model's expressive capability through multi-level feature fusion and iterative mask optimization, supporting unprompted learning. We also propose a unified pipeline that effectively supplies high-quality semantic information to SAM. Extensive experiments on the Synapse dataset demonstrate that the proposed PG-SAM achieves state-of-the-art performance. Our code is released at this https URL.
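For illustration only, the following is a minimal PyTorch sketch of the two components the abstract describes: a prior aligner that conditions image features on fine-grained text embeddings, and a decoder that fuses multi-level features and iteratively refines the mask. This is not the authors' implementation; the module names, cross-attention-based aligner, feature dimensions, class count (nine Synapse classes including background), and refinement loop are all assumptions made for this sketch.

# Minimal sketch of the PG-SAM idea as summarized in the abstract, not the authors'
# implementation. All module names, dimensions, and fusion/refinement details are
# assumptions for illustration only.
import torch
import torch.nn as nn


class FineGrainedPriorAligner(nn.Module):
    """Aligns fine-grained medical-text embeddings with image features (assumed cross-attention design)."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, text_tokens):
        # img_tokens: (B, N_img, dim) flattened image features from the SAM image encoder
        # text_tokens: (B, N_txt, dim) projected embeddings from a medical LLM
        attended, _ = self.cross_attn(img_tokens, text_tokens, text_tokens)
        return self.norm(img_tokens + attended)  # text-conditioned prior features


class IterativeMaskDecoder(nn.Module):
    """Fuses multi-level features and refines the mask over a few steps (assumed design)."""

    def __init__(self, dim=256, num_classes=9, steps=3):
        super().__init__()
        self.steps = steps
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)
        self.head = nn.Conv2d(dim, num_classes, kernel_size=1)
        self.refine = nn.Sequential(
            nn.Conv2d(dim + num_classes, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, num_classes, 1),
        )

    def forward(self, low_feat, high_feat):
        # low_feat, high_feat: (B, dim, H, W) multi-level features (assumed already resized to a common grid)
        fused = self.fuse(torch.cat([low_feat, high_feat], dim=1))
        mask = self.head(fused)
        for _ in range(self.steps):  # iterative mask optimization
            mask = mask + self.refine(torch.cat([fused, mask], dim=1))
        return mask  # (B, num_classes, H, W) logits


if __name__ == "__main__":
    B, dim, H, W = 1, 256, 64, 64
    img_tokens = torch.randn(B, H * W, dim)
    text_tokens = torch.randn(B, 32, dim)  # stand-in for medical-LLM text priors
    aligned = FineGrainedPriorAligner(dim)(img_tokens, text_tokens)
    aligned_map = aligned.transpose(1, 2).reshape(B, dim, H, W)
    logits = IterativeMaskDecoder(dim)(aligned_map, torch.randn(B, dim, H, W))
    print(logits.shape)  # torch.Size([1, 9, 64, 64])

In this sketch the aligned, text-conditioned features stand in for the high-quality priors the pipeline supplies to SAM; how the real method injects them into the SAM decoder and supports unprompted learning is described in the paper itself.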
@article{zhong2025_2503.18227,
  title   = {PG-SAM: Prior-Guided SAM with Medical for Multi-organ Segmentation},
  author  = {Yiheng Zhong and Zihong Luo and Chengzhi Liu and Feilong Tang and Zelin Peng and Ming Hu and Yingzhen Hu and Jionglong Su and Zongyuan Ge and Imran Razzak},
  journal = {arXiv preprint arXiv:2503.18227},
  year    = {2025}
}