Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy

12 September 2024
Bojian Li
Bo Liu
Xinning Yao
Jinghua Yue
Fugen Zhou
    MedIm
    MDE
Abstract

Depth estimation is a cornerstone of 3D reconstruction and plays a vital role in minimally invasive endoscopic surgery. However, most current depth estimation networks rely on traditional convolutional neural networks, which are limited in their ability to capture global information. Foundation models offer a promising route to better depth estimation, but the models currently available are trained primarily on natural images and therefore perform suboptimally on endoscopic images. In this work, we introduce a novel fine-tuning strategy for the Depth Anything Model and integrate it with an intrinsic-based unsupervised monocular depth estimation framework. Our approach includes a low-rank adaptation technique based on random vectors, which improves the model's adaptability to different scales. Additionally, we propose a residual block built on depthwise separable convolution to compensate for the transformer's limited ability to capture local features. Experimental results on the SCARED and Hamlyn datasets show that our method achieves state-of-the-art performance while minimizing the number of trainable parameters. Applied in minimally invasive endoscopic surgery, this method can enhance surgeons' spatial awareness and thereby improve the precision and safety of procedures.
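
The abstract names two components: a low-rank adaptation scheme based on random vectors and a residual block built on depthwise separable convolution. The sketch below is a minimal, hypothetical PyTorch illustration of what such modules could look like; the class names, rank, and initialization choices are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a random-vector low-rank adapter
# around a frozen linear layer, plus a depthwise-separable residual block.
import torch
import torch.nn as nn


class RandomVectorLoRALinear(nn.Module):
    """Low-rank update whose projection matrices are frozen random buffers;
    only two small scaling vectors are trained (assumed rank and init)."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen

        in_f, out_f = base.in_features, base.out_features
        # Frozen random low-rank projections (not trained).
        self.register_buffer("A", torch.randn(rank, in_f) / in_f ** 0.5)
        self.register_buffer("B", torch.randn(out_f, rank) / rank ** 0.5)
        # Trainable scaling vectors: the only new parameters.
        self.d = nn.Parameter(torch.ones(rank))
        self.b = nn.Parameter(torch.zeros(out_f))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + B diag(d) A x + b  (vector-scaled low-rank update)
        delta = (x @ self.A.t()) * self.d @ self.B.t() + self.b
        return self.base(x) + delta


class DepthwiseSeparableResidualBlock(nn.Module):
    """Residual block of a depthwise + pointwise convolution pair, meant to
    reintroduce local detail alongside transformer features."""

    def __init__(self, channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.act(self.norm(self.pointwise(self.depthwise(x))))


if __name__ == "__main__":
    # Quick shape checks on dummy data.
    lin = RandomVectorLoRALinear(nn.Linear(384, 384), rank=8)
    print(lin(torch.randn(2, 197, 384)).shape)    # torch.Size([2, 197, 384])
    blk = DepthwiseSeparableResidualBlock(64)
    print(blk(torch.randn(2, 64, 56, 56)).shape)  # torch.Size([2, 64, 56, 56])
```

Freezing the random projections and training only the scaling vectors keeps the trainable-parameter count very small, which matches the abstract's emphasis on minimizing trainable parameters, though the exact formulation used in the paper may differ.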

@article{li2025_2409.07723,
  title={Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy},
  author={Bojian Li and Bo Liu and Xinning Yao and Jinghua Yue and Fugen Zhou},
  journal={arXiv preprint arXiv:2409.07723},
  year={2025}
}