ViewpointDepth: A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts

26 September 2024
Aurel Pjetri
Stefano Caprasecca
Leonardo Taccari
Matteo Simoncini
Henrique Piñeiro Monteagudo
Walter Wallace
Douglas Coimbra de Andrade
Francesco Sambo
Andrew David Bagdanov
Abstract

Monocular depth estimation is a critical task for autonomous driving and many other computer vision applications. While significant progress has been made in this field, the effects of viewpoint shifts on depth estimation models remain largely underexplored. This paper introduces a novel dataset and evaluation methodology to quantify the impact of different camera positions and orientations on monocular depth estimation performance. We propose a ground truth strategy based on homography estimation and object detection, eliminating the need for expensive LIDAR sensors. We collect a diverse dataset of road scenes from multiple viewpoints and use it to assess the robustness of a modern depth estimation model to geometric shifts. After assessing the validity of our strategy on a public dataset, we provide valuable insights into the limitations of current models and highlight the importance of considering viewpoint variations in real-world applications.

@article{pjetri2024_2409.17851,
  title={ViewpointDepth: A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts},
  author={Aurel Pjetri and Stefano Caprasecca and Leonardo Taccari and Matteo Simoncini and Henrique Piñeiro Monteagudo and Walter Wallace and Douglas Coimbra de Andrade and Francesco Sambo and Andrew David Bagdanov},
  journal={arXiv preprint arXiv:2409.17851},
  year={2024}
}