Depth Anything at Any Condition

Boyuan Sun
Modi Jin
Bowen Yin
Qibin Hou
Main: 9 pages · 5 figures · 10 tables · Bibliography: 7 pages · Appendix: 7 pages
Abstract

We present Depth Anything at Any Condition (DepthAnything-AC), a foundation monocular depth estimation (MDE) model capable of handling diverse environmental conditions. Previous foundation MDE models achieve impressive performance across general scenes but do not perform well in complex open-world environments with challenging conditions, such as illumination variations, adverse weather, and sensor-induced distortions. To overcome the challenges of data scarcity and the difficulty of generating high-quality pseudo-labels from corrupted images, we propose an unsupervised consistency regularization finetuning paradigm that requires only a relatively small amount of unlabeled data. Furthermore, we propose the Spatial Distance Constraint, which explicitly forces the model to learn patch-level relative relationships, resulting in clearer semantic boundaries and more accurate details. Experimental results demonstrate the zero-shot capabilities of DepthAnything-AC across diverse benchmarks, including real-world adverse weather benchmarks, synthetic corruption benchmarks, and general benchmarks. Project Page: this https URL. Code: this https URL.
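To make the two ideas in the abstract concrete, here is a minimal numpy sketch, under assumptions, of what an unsupervised consistency objective with a patch-level spatial distance term could look like. The function names (`consistency_loss`, `spatial_distance_constraint`, `total_loss`) and the specific losses (L1 on depth maps, L1 on pairwise patch-mean distances) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def consistency_loss(depth_clean, depth_corrupted):
    # Hypothetical consistency term: L1 distance between the depth map
    # predicted from a clean image (used as a pseudo-label) and the depth
    # map predicted from a corrupted view of the same image.
    return np.mean(np.abs(depth_clean - depth_corrupted))

def spatial_distance_constraint(depth, patch=4):
    # Hypothetical patch-level relational term: split the depth map into
    # patch x patch blocks, take each block's mean depth, and return the
    # matrix of pairwise absolute differences between block means.
    h, w = depth.shape
    means = depth.reshape(h // patch, patch, w // patch, patch)
    means = means.mean(axis=(1, 3)).ravel()
    return np.abs(means[:, None] - means[None, :])

def total_loss(d_clean, d_corrupt, alpha=1.0):
    # Combine pixel-level consistency with consistency of the patch-level
    # relative-distance structure between the two predictions.
    l_cons = consistency_loss(d_clean, d_corrupt)
    l_sdc = np.mean(np.abs(spatial_distance_constraint(d_clean)
                           - spatial_distance_constraint(d_corrupt)))
    return l_cons + alpha * l_sdc
```

In this toy setup the loss is zero when both predictions agree exactly, and the spatial term penalizes corruptions that distort relative depth ordering between patches even when the average depth error is small.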

@article{sun2025_2507.01634,
  title={Depth Anything at Any Condition},
  author={Boyuan Sun and Modi Jin and Bowen Yin and Qibin Hou},
  journal={arXiv preprint arXiv:2507.01634},
  year={2025}
}