Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence

Anita Rau
Mark Endo
Josiah Aklilu
Jaewoo Heo
Khaled Saab
Alberto Paderno
Jeffrey Jopling
F. Christopher Holsinger
Serena Yeung-Levy
Abstract

Large Vision-Language Models (VLMs) offer a new paradigm for AI-driven image understanding, enabling models to perform tasks without task-specific training. This flexibility holds particular promise across medicine, where expert-annotated data is scarce. Yet VLMs' practical utility in intervention-focused domains--especially surgery, where decision-making is subjective and clinical scenarios are variable--remains uncertain. Here, we present a comprehensive analysis of 11 state-of-the-art VLMs across 17 key visual understanding tasks in surgical AI--from anatomy recognition to skill assessment--using 13 datasets spanning laparoscopic, robotic, and open procedures. In our experiments, VLMs demonstrate promising generalizability, at times outperforming supervised models when deployed outside their training setting. In-context learning, i.e., incorporating examples at test time, boosted performance up to three-fold, suggesting adaptability is a key strength. Still, tasks requiring spatial or temporal reasoning remained difficult. Beyond surgery, our findings offer insights into VLMs' potential for tackling complex and dynamic scenarios in clinical and broader real-world applications.

@article{rau2025_2504.02799,
  title={Systematic Evaluation of Large Vision-Language Models for Surgical Artificial Intelligence},
  author={Anita Rau and Mark Endo and Josiah Aklilu and Jaewoo Heo and Khaled Saab and Alberto Paderno and Jeffrey Jopling and F. Christopher Holsinger and Serena Yeung-Levy},
  journal={arXiv preprint arXiv:2504.02799},
  year={2025}
}