
U2-BENCH: Benchmarking Large Vision-Language Models on Ultrasound Understanding

Main: 9 pages · 6 figures · 7 tables · Bibliography: 7 pages · Appendix: 32 pages
Abstract

Ultrasound is a widely used imaging modality critical to global healthcare, yet its interpretation remains challenging because image quality varies with operator skill, noise, and anatomical structures. Although large vision-language models (LVLMs) have demonstrated impressive multimodal capabilities across natural and medical domains, their performance on ultrasound remains largely unexplored. We introduce U2-BENCH, the first comprehensive benchmark for evaluating LVLMs on ultrasound understanding across classification, detection, regression, and text generation tasks. U2-BENCH aggregates 7,241 cases spanning 15 anatomical regions and defines 8 clinically inspired tasks, such as diagnosis, view recognition, lesion localization, clinical value estimation, and report generation, across 50 ultrasound application scenarios. We evaluate 20 state-of-the-art LVLMs, both open- and closed-source, and both general-purpose and medical-specific. Our results reveal strong performance on image-level classification but persistent challenges in spatial reasoning and clinical language generation. U2-BENCH establishes a rigorous and unified testbed to assess and accelerate LVLM research in the uniquely multimodal domain of medical ultrasound imaging.

@article{le2025_2505.17779,
  title={U2-BENCH: Benchmarking Large Vision-Language Models on Ultrasound Understanding},
  author={Anjie Le and Henan Liu and Yue Wang and Zhenyu Liu and Rongkun Zhu and Taohan Weng and Jinze Yu and Boyang Wang and Yalun Wu and Kaiwen Yan and Quanlin Sun and Meirui Jiang and Jialun Pei and Siya Liu and Haoyun Zheng and Zhoujun Li and Alison Noble and Jacques Souquet and Xiaoqing Guo and Manxi Lin and Hongcheng Guo},
  journal={arXiv preprint arXiv:2505.17779},
  year={2025}
}