
Heidelberg Colorectal Data Set for Surgical Data Science in the Sensor Operating Room

7 May 2020
Lena Maier-Hein
M. Wagner
T. Ross
Annika Reinke
S. Bodenstedt
Peter M. Full
Hellena Hempe
D. Filimon
Patrick Scholz
T. Tran
Pierangela Bruno
A. Kisilenko
Benjamin Müller
Tornike Davitashvili
Manuela Capek
M. Tizabi
Matthias Eisenmann
T. Adler
J. Gröhl
Melanie Schellenberg
Silvia Seidlitz
T. Y. E. Lai
Bunyamin Pekdemir
Veith Roethlingshoefer
Fabian Both
Sebastian Bittel
M. Mengler
Lars Mundermann
M. Apitz
Annette Kopp-Schneider
Stefanie Speidel
H. Kenngott
Beat P. Müller-Stich
Abstract

Image-based tracking of medical instruments is an integral part of surgical data science applications. Previous research has addressed the tasks of detecting, segmenting and tracking medical instruments based on laparoscopic video data. However, the proposed methods still tend to fail when applied to challenging images and do not generalize well to data they have not been trained on. This paper introduces the Heidelberg Colorectal (HeiCo) data set: the first publicly available data set enabling comprehensive benchmarking of medical instrument detection and segmentation algorithms with a specific emphasis on method robustness and generalization capabilities. Our data set comprises 30 laparoscopic videos and corresponding sensor data from medical devices in the operating room for three different types of laparoscopic surgery. Annotations include surgical phase labels for all video frames as well as information on instrument presence and corresponding instance-wise segmentation masks for surgical instruments (if any) in more than 10,000 individual frames. The data has successfully been used to organize international competitions within the Endoscopic Vision Challenges 2017 and 2019.
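
The abstract describes two annotation streams: per-frame surgical phase labels and instance-wise instrument segmentation masks. As a rough illustration of how such annotations might be consumed, here is a minimal Python sketch; the file names, CSV columns, and integer-valued mask encoding below are assumptions for illustration, not the dataset's documented format.

```python
# A minimal sketch of consuming per-frame annotations like those HeiCo provides.
# All paths, file names, and the mask encoding are assumptions; consult the
# official dataset documentation for the real layout.
import csv

import numpy as np
from PIL import Image


def load_instance_masks(mask_path):
    """Split an instance-wise segmentation mask into per-instrument binary masks.

    Assumes instances are encoded as distinct non-zero integer pixel values,
    with 0 as background (a common convention, not confirmed for HeiCo).
    """
    mask = np.array(Image.open(mask_path))
    instance_ids = [i for i in np.unique(mask) if i != 0]
    return {int(i): (mask == i) for i in instance_ids}


def load_phase_labels(csv_path):
    """Read per-frame surgical phase labels from a hypothetical CSV
    with columns: frame_index, phase."""
    with open(csv_path, newline="") as f:
        return {int(row["frame_index"]): row["phase"] for row in csv.DictReader(f)}


if __name__ == "__main__":
    masks = load_instance_masks("frame_000123_mask.png")  # hypothetical file
    phases = load_phase_labels("video_01_phases.csv")     # hypothetical file
    print(f"{len(masks)} instrument instance(s) in frame; "
          f"phase of frame 123: {phases.get(123, 'unknown')}")
```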
