
Hierarchical Context Transformer for Multi-level Semantic Scene Understanding

Abstract

A comprehensive and explicit understanding of surgical scenes plays a vital role in developing context-aware computer-assisted systems in the operating theatre. However, few works provide a systematic analysis to enable hierarchical surgical scene understanding. In this work, we propose to represent the task set [phase recognition --> step recognition --> action and instrument detection] as multi-level semantic scene understanding (MSSU). To this end, we propose a novel hierarchical context transformer (HCT) network and thoroughly explore the relations across the different-level tasks. Specifically, a hierarchical relation aggregation module (HRAM) is designed to concurrently relate entries within the multi-level interaction information and then augment task-specific features. To further boost the representation learning of the different tasks, inter-task contrastive learning (ICL) is presented to guide the model to learn task-wise features by absorbing complementary information from the other tasks. Furthermore, considering the computational cost of the transformer, we propose HCT+, which integrates spatial and temporal adapters to achieve competitive performance with substantially fewer tunable parameters. Extensive experiments on our cataract dataset and the publicly available endoscopic PSI-AVA dataset demonstrate the outstanding performance of our method, which consistently exceeds state-of-the-art methods by a large margin. The code is available at this https URL.
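The abstract does not specify the design of the spatial and temporal adapters in HCT+. As a rough illustration of why adapter-based tuning needs so few trainable parameters, the sketch below shows a generic bottleneck adapter (down-projection, nonlinearity, up-projection, residual connection) in the common style of parameter-efficient tuning; the dimensions, initialization, and GELU nonlinearity are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def adapter(x, W_down, W_up):
    # Generic bottleneck adapter: down-project, nonlinearity, up-project,
    # then add a residual so the frozen backbone's features pass through unchanged
    # when the adapter contributes nothing.
    h = x @ W_down                                   # (d,) -> (r,), with r << d
    # tanh approximation of GELU
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (h + 0.044715 * h**3)))
    return x + h @ W_up                              # (r,) -> (d,), residual add

d, r = 768, 64                                       # hidden and bottleneck sizes (assumed)
rng = np.random.default_rng(0)
W_down = rng.normal(0.0, 0.02, (d, r))               # only these two matrices are trained:
W_up = np.zeros((r, d))                              # 2*d*r params vs. ~d*d for a full layer

x = rng.normal(size=d)
print(np.allclose(adapter(x, W_down, W_up), x))      # zero-init up-projection => identity
```

With the up-projection initialized to zero, the adapter starts as the identity map, so inserting it into a pretrained transformer does not perturb the frozen backbone at the beginning of fine-tuning; only the two small projection matrices are updated, which is what keeps the tunable parameter count low.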

@article{hao2025_2502.15184,
  title={Hierarchical Context Transformer for Multi-level Semantic Scene Understanding},
  author={Luoying Hao and Yan Hu and Yang Yue and Li Wu and Huazhu Fu and Jinming Duan and Jiang Liu},
  journal={arXiv preprint arXiv:2502.15184},
  year={2025}
}