ICG-MVSNet: Learning Intra-view and Cross-view Relationships for Guidance in Multi-View Stereo

27 March 2025
Yuxi Hu
Jun Zhang
Zhe Zhang
Rafael Weilharter
Yuchen Rao
Kuangyi Chen
Runze Yuan
Friedrich Fraundorfer
    3DV
Abstract

Multi-view Stereo (MVS) aims to estimate depth and reconstruct 3D point clouds from a series of overlapping images. Recent learning-based MVS frameworks overlook the geometric information embedded in features and correlations, leading to weak cost matching. In this paper, we propose ICG-MVSNet, which explicitly integrates intra-view and cross-view relationships for depth estimation. Specifically, we develop an intra-view feature fusion module that leverages the feature coordinate correlations within a single image to enhance robust cost matching. Additionally, we introduce a lightweight cross-view aggregation module that efficiently utilizes the contextual information of volume correlations to guide regularization. Our method is evaluated on the DTU dataset and the Tanks and Temples benchmark, consistently achieving performance competitive with state-of-the-art methods while requiring fewer computational resources.
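The abstract does not specify the modules in detail, but the correlation-based cost volume and depth regression that such MVS networks build on can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`correlation_volume`, `softargmax_depth`) and the assumption that source-view features are already warped onto each depth hypothesis are ours.

```python
import numpy as np

def correlation_volume(ref_feat, src_feats):
    """Dot-product (correlation) cost volume.

    ref_feat:  (C, H, W)       reference-view feature map
    src_feats: (V, D, C, H, W) source features pre-warped to D depth planes
    returns:   (D, H, W)       per-depth matching score, averaged over views
    """
    # Channel-wise inner product per view and depth hypothesis,
    # normalized by the channel count.
    corr = np.einsum('chw,vdchw->vdhw', ref_feat, src_feats)
    return corr.mean(axis=0) / ref_feat.shape[0]

def softargmax_depth(cost, depth_values):
    """Soft-argmax depth regression from a cost volume (higher = better match)."""
    # Numerically stable softmax over the depth dimension.
    prob = np.exp(cost - cost.max(axis=0, keepdims=True))
    prob /= prob.sum(axis=0, keepdims=True)
    # Expected depth under the matching probability distribution.
    return np.einsum('d,dhw->hw', depth_values, prob)

# Toy usage: 2 source views, 5 depth hypotheses, 8 channels, 4x4 feature maps.
rng = np.random.default_rng(0)
ref = rng.random((8, 4, 4))
src = rng.random((2, 5, 8, 4, 4))
vol = correlation_volume(ref, src)          # (5, 4, 4)
depth = softargmax_depth(vol, np.linspace(1.0, 5.0, 5))  # (4, 4)
```

In a full pipeline the warped source features come from homography warping under each depth hypothesis; the paper's intra-view fusion would refine `ref_feat` before correlation, and its cross-view aggregation would operate on the per-view correlations before they are averaged.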

@article{hu2025_2503.21525,
  title={ICG-MVSNet: Learning Intra-view and Cross-view Relationships for Guidance in Multi-View Stereo},
  author={Yuxi Hu and Jun Zhang and Zhe Zhang and Rafael Weilharter and Yuchen Rao and Kuangyi Chen and Runze Yuan and Friedrich Fraundorfer},
  journal={arXiv preprint arXiv:2503.21525},
  year={2025}
}