arXiv:2003.10299
Robust Medical Instrument Segmentation Challenge 2019

23 March 2020
T. Ross
Annika Reinke
Peter M. Full
M. Wagner
H. Kenngott
M. Apitz
Hellena Hempe
D. Filimon
Patrick Scholz
T. Tran
Pierangela Bruno
Pablo Arbelaez
Guibin Bian
S. Bodenstedt
J. Bolmgren
Laura Bravo-Sánchez
Huabin Chen
Cristina González
D. Guo
P. Halvorsen
Pheng-Ann Heng
Enes Hosgor
Z. Hou
Fabian Isensee
Debesh Jha
Tingting Jiang
Yueming Jin
K. Kirtaç
Sabrina Kletz
S. Leger
Zhixuan Li
Klaus H. Maier-Hein
Zhen-Liang Ni
Michael A. Riegler
Klaus Schoeffmann
Ruohua Shi
Stefanie Speidel
Michael Stenzel
Isabell Twick
Gutai Wang
Jiacheng Wang
Liansheng Wang
Lu Wang
Yujie Zhang
Yan-Jie Zhou
Lei Zhu
Manuel Wiesenfarth
A. Kopp-Schneider
Beat P. Müller-Stich
Lena Maier-Hein
Abstract

Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer- and robot-assisted interventions. While numerous methods for detecting, segmenting and tracking medical instruments in endoscopic video images have been proposed in the literature, two key limitations remain to be addressed. First, robustness: state-of-the-art methods should perform reliably on challenging images (e.g. in the presence of blood, smoke or motion artifacts). Second, generalization: algorithms trained for a specific intervention in a specific hospital should generalize to other interventions or institutions. To promote solutions to these limitations, we organized the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge as an international benchmarking competition with a specific focus on the robustness and generalization capabilities of algorithms. For the first time in the field of endoscopic image processing, the challenge included not only a binary segmentation task but also multi-instance detection and segmentation. The challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures covering three different types of surgery. The competing methods for the three tasks (binary segmentation, multi-instance detection and multi-instance segmentation) were validated in three stages with an increasing domain gap between the training and the test data. The results confirm the initial hypothesis that algorithm performance degrades as the domain gap increases. While the average detection and segmentation quality of the best-performing algorithms is high, future research should concentrate on the detection and segmentation of small, crossing, moving and transparent instruments (and instrument parts).
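As an illustration of how binary segmentation quality is typically scored in challenges of this kind, the sketch below computes the Dice similarity coefficient (DSC) between a predicted and a reference binary instrument mask. This is a generic, assumed example, not the challenge's official evaluation code; the handling of empty masks shown here is one common convention and may differ from the ranking rules actually used by ROBUST-MIS.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred ∩ gt| / (|pred| + |gt|), ranging from 0 (no
    overlap) to 1 (perfect agreement).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    if total == 0:
        # Convention assumed here: two empty masks agree perfectly.
        return 1.0
    return 2.0 * intersection / total
```

A method's per-task score would then aggregate such per-image values over the test set, with the test images drawn from increasingly different procedures or institutions at each validation stage to probe the domain gap.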
