Multi-view Reconstruction via SfM-guided Monocular Depth Estimation

18 March 2025
Haoyu Guo
He Zhu
Sida Peng
Haotong Lin
Yunzhi Yan
Tao Xie
Wenguan Wang
Xiaowei Zhou
Hujun Bao
    3DV
    MDE
Abstract

In this paper, we present a new method for multi-view geometric reconstruction. In recent years, large vision models have developed rapidly, performing well across a variety of tasks and demonstrating remarkable generalization capabilities. Some works use large vision models for monocular depth estimation, which has been applied to facilitate multi-view reconstruction tasks in an indirect manner. Due to the ambiguity of the monocular depth estimation task, the estimated depth values are usually not accurate enough, limiting their utility in aiding multi-view reconstruction. We propose to incorporate SfM information, a strong multi-view prior, into the depth estimation process, thus enhancing the quality of depth predictions and enabling their direct application in multi-view geometric reconstruction. Experimental results on public real-world datasets show that our method significantly improves the quality of depth estimation compared to previous monocular depth estimation works. Additionally, we evaluate the reconstruction quality of our approach in various types of scenes, including indoor, streetscape, and aerial views, surpassing state-of-the-art MVS methods. The code and supplementary materials are available at this https URL.
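The abstract does not describe how the SfM prior is injected into the depth network. As a point of reference only, the sketch below shows one common way that sparse SfM depths can be used to constrain a relative monocular depth map: a closed-form least-squares scale-and-shift alignment against the triangulated points. The function name and the alignment strategy are illustrative assumptions, not the paper's actual guidance mechanism.

```python
import numpy as np

def align_depth_to_sfm(mono_depth, sfm_depth, sfm_mask):
    """Align a monocular depth map to sparse SfM depths.

    A minimal sketch (assumption, not the paper's method): fit a global
    scale s and shift t so that s * mono_depth + t best matches the
    metric depths of triangulated SfM points in the least-squares sense.

    mono_depth: (H, W) relative depth from a monocular network
    sfm_depth:  (H, W) metric depth at pixels with triangulated SfM points
    sfm_mask:   (H, W) boolean mask, True where an SfM depth exists
    """
    d = mono_depth[sfm_mask].astype(np.float64)
    z = sfm_depth[sfm_mask].astype(np.float64)
    # Solve min_{s,t} || s * d + t - z ||^2 in closed form via lstsq.
    A = np.stack([d, np.ones_like(d)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, z, rcond=None)
    return s * mono_depth + t
```

Such a per-image alignment removes the global scale/shift ambiguity of monocular prediction, but it cannot correct local depth errors; the paper's contribution is to use the SfM prior during estimation itself rather than only as a post-hoc correction.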

@article{guo2025_2503.14483,
  title={Multi-view Reconstruction via SfM-guided Monocular Depth Estimation},
  author={Haoyu Guo and He Zhu and Sida Peng and Haotong Lin and Yunzhi Yan and Tao Xie and Wenguan Wang and Xiaowei Zhou and Hujun Bao},
  journal={arXiv preprint arXiv:2503.14483},
  year={2025}
}