
Underwater Monocular Metric Depth Estimation: Real-World Benchmarks and Synthetic Fine-Tuning

Zijie Cai
Christopher Metzler
Main: 8 Pages
3 Figures
Bibliography: 3 Pages
3 Tables
Abstract

Monocular depth estimation has recently advanced to provide not only relative but also metric depth predictions. However, its reliability in underwater environments remains limited due to light attenuation and scattering, color distortion, turbidity, and the lack of high-quality metric ground-truth data. In this paper, we present a comprehensive benchmark of zero-shot and fine-tuned monocular metric depth estimation models on real-world underwater datasets with metric depth annotations, such as FLSea and SQUID. We evaluate a diverse set of state-of-the-art models across underwater conditions with varying depth ranges. Our results show that large-scale models trained on terrestrial (real or synthetic) data, while effective in in-air settings, perform poorly underwater due to significant domain shifts. To address this, we fine-tune Depth Anything V2 with a ViT-S backbone encoder on a synthetic underwater variant of the Hypersim dataset, which we generated using a physically based underwater image formation model. We demonstrate that our fine-tuned model consistently improves performance across all benchmarks and outperforms baselines trained only on the clean in-air Hypersim dataset. Our study provides a detailed evaluation and visualization of monocular metric depth estimation in underwater scenes, highlighting the importance of domain adaptation and scale-aware supervision for achieving robust and generalizable metric depth predictions in challenging underwater environments, and offers a reference point for future research.
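The abstract refers to a physically based underwater image formation model used to render the underwater variant of Hypersim. The sketch below illustrates one common simplified formulation of such a model, with separate per-channel direct-transmission and backscatter attenuation coefficients; it is an assumption for illustration only, not the authors' exact pipeline, and the function name, coefficient values, and data loader are hypothetical.

```python
# Minimal sketch (assumed formulation, not the paper's exact pipeline):
# degrade a clean in-air RGB image with metric depth into an underwater-looking
# image using the simplified model
#   I_c = J_c * exp(-beta_c^D * z) + B_c^inf * (1 - exp(-beta_c^B * z)),
# where J_c is clean radiance, z metric depth, beta_c^D / beta_c^B per-channel
# direct / backscatter attenuation, and B_c^inf the veiling light.
import numpy as np

def synthesize_underwater(rgb, depth_m,
                          beta_d=(0.40, 0.20, 0.10),   # R, G, B direct attenuation [1/m] (illustrative)
                          beta_b=(0.45, 0.25, 0.12),   # R, G, B backscatter attenuation [1/m] (illustrative)
                          veiling=(0.05, 0.35, 0.45)): # B_inf veiling-light color (illustrative)
    """rgb: HxWx3 float array in [0, 1]; depth_m: HxW metric depth in meters."""
    z = depth_m[..., None]                       # HxWx1 so it broadcasts over channels
    beta_d = np.asarray(beta_d)[None, None, :]
    beta_b = np.asarray(beta_b)[None, None, :]
    veiling = np.asarray(veiling)[None, None, :]

    direct = rgb * np.exp(-beta_d * z)                    # attenuated scene radiance
    backscatter = veiling * (1.0 - np.exp(-beta_b * z))   # depth-dependent veiling haze
    return np.clip(direct + backscatter, 0.0, 1.0)

# Hypothetical usage on a Hypersim-style RGB-D pair:
# rgb, depth = load_hypersim_sample(...)   # HxWx3 image in [0,1], HxW depth in meters
# underwater_rgb = synthesize_underwater(rgb, depth)
```

Because the degradation is driven by the same metric depth map used as supervision, the fine-tuned model sees scale-consistent underwater appearance and ground truth, which is the domain-adaptation effect the abstract describes.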

@article{cai2025_2507.02148,
  title={Underwater Monocular Metric Depth Estimation: Real-World Benchmarks and Synthetic Fine-Tuning},
  author={Zijie Cai and Christopher Metzler},
  journal={arXiv preprint arXiv:2507.02148},
  year={2025}
}