The Fourth Monocular Depth Estimation Challenge

24 April 2025
Anton Obukhov, Matteo Poggi, Fabio Tosi, Ripudaman Singh Arora, Jaime Spencer, Chris Russell, Simon Hadfield, Richard Bowden, Shuaihang Wang, Zhenxin Ma, Weijie Chen, Baobei Xu, Fengyu Sun, Di Xie, Jiang Zhu, Mykola Lavreniuk, Haining Guan, Qun Wu, Yupei Zeng, Chao Lu, Huanran Wang, Guangyuan Zhou, Haotian Zhang, Jianxiong Wang, Qiang Rao, Chunjie Wang, Xiao Liu, Zhiqiang Lou, Hualie Jiang, Yihao Chen, Rui Xu, Minglang Tan, Zihan Qin, Yifan Mao, Jiayang Liu, Jialei Xu, Yifan Yang, Wenbo Zhao, Junjun Jiang, Xianming Liu, Mingshuai Zhao, Anlong Ming, Wu Chen, Feng Xue, Mengying Yu, Shida Gao, Xiangfeng Wang, Gbenga Omotara, Ramy M. A. Farag, Jacket Demby, Seyed Mohamad Ali Tousi, Guilherme N. DeSouza, Tuan-Anh Yang, Minh-Quang Nguyen, Thien-Phuc Tran, Albert Luginov, Muhammad Shahzad
Abstract

This paper presents the results of the fourth edition of the Monocular Depth Estimation Challenge (MDEC), which focuses on zero-shot generalization to the SYNS-Patches benchmark, a dataset featuring challenging environments in both natural and indoor settings. In this edition, we revised the evaluation protocol to use least-squares alignment with two degrees of freedom to support disparity and affine-invariant predictions. We also revised the baselines and included popular off-the-shelf methods: Depth Anything v2 and Marigold. The challenge received a total of 24 submissions that outperformed the baselines on the test set; 10 of these included a report describing their approach, with most leading methods relying on affine-invariant predictions. The challenge winners improved the 3D F-Score over the previous edition's best result, raising it from 22.58% to 23.05%.
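
The two-degree-of-freedom alignment mentioned above fits a per-image scale and shift that minimize the squared error between prediction and ground truth before metrics are computed, which is what makes the protocol compatible with affine-invariant and disparity-space outputs. As a rough illustration only, not the challenge's reference code, a minimal NumPy sketch of such an alignment might look like this (the function name, array shapes, and usage are assumptions):

import numpy as np

def align_scale_shift(pred, gt, mask):
    # Solve min_{s,t} || s * pred + t - gt ||^2 over valid pixels.
    # For disparity-space predictions, align against 1 / gt instead.
    p = pred[mask].ravel()
    g = gt[mask].ravel()
    A = np.stack([p, np.ones_like(p)], axis=1)      # design matrix [pred, 1]
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)  # closed-form least squares
    return s * pred + t

# Illustrative usage: HxW maps with a validity mask for the ground truth.
rng = np.random.default_rng(0)
gt = rng.uniform(1.0, 10.0, size=(8, 8))
pred = 0.5 * gt + 2.0                                # affine-distorted prediction
mask = np.ones_like(gt, dtype=bool)
aligned = align_scale_shift(pred, gt, mask)
print(np.abs(aligned - gt).max())                    # near zero after alignment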

@article{obukhov2025_2504.17787,
  title={The Fourth Monocular Depth Estimation Challenge},
  author={Anton Obukhov and Matteo Poggi and Fabio Tosi and Ripudaman Singh Arora and Jaime Spencer and Chris Russell and Simon Hadfield and Richard Bowden and Shuaihang Wang and Zhenxin Ma and Weijie Chen and Baobei Xu and Fengyu Sun and Di Xie and Jiang Zhu and Mykola Lavreniuk and Haining Guan and Qun Wu and Yupei Zeng and Chao Lu and Huanran Wang and Guangyuan Zhou and Haotian Zhang and Jianxiong Wang and Qiang Rao and Chunjie Wang and Xiao Liu and Zhiqiang Lou and Hualie Jiang and Yihao Chen and Rui Xu and Minglang Tan and Zihan Qin and Yifan Mao and Jiayang Liu and Jialei Xu and Yifan Yang and Wenbo Zhao and Junjun Jiang and Xianming Liu and Mingshuai Zhao and Anlong Ming and Wu Chen and Feng Xue and Mengying Yu and Shida Gao and Xiangfeng Wang and Gbenga Omotara and Ramy Farag and Jacket Demby and Seyed Mohamad Ali Tousi and Guilherme N. DeSouza and Tuan-Anh Yang and Minh-Quang Nguyen and Thien-Phuc Tran and Albert Luginov and Muhammad Shahzad},
  journal={arXiv preprint arXiv:2504.17787},
  year={2025}
}