TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion

16 April 2025
Yiran Wang
Jiaqi Li
Chaoyi Hong
Ruibo Li
Liusheng Sun
Xiao Song
Zhe Wang
Zhiguo Cao
Guosheng Lin
Abstract

Radar-Camera depth estimation aims to predict dense and accurate metric depth by fusing input images and Radar data. Model efficiency is crucial for this task in pursuit of real-time processing on autonomous vehicles and robotic platforms. However, due to the sparsity of Radar returns, prevailing methods adopt multi-stage frameworks with intermediate quasi-dense depth, which are time-consuming and lack robustness. To address these challenges, we propose TacoDepth, an efficient and accurate Radar-Camera depth estimation model with one-stage fusion. Specifically, a graph-based Radar structure extractor and a pyramid-based Radar fusion module are designed to capture and integrate the graph structures of Radar point clouds, delivering superior efficiency and robustness without relying on intermediate depth results. Moreover, TacoDepth is flexible across different inference modes, providing a better balance of speed and accuracy. Extensive experiments demonstrate the efficacy of our method. Compared with the previous state-of-the-art approach, TacoDepth improves depth accuracy and processing speed by 12.8% and 91.8%, respectively. Our work provides a new perspective on efficient Radar-Camera depth estimation.
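
The abstract describes fusing Radar point-cloud structure with image features in a single stage, with no intermediate quasi-dense depth map. Below is a minimal PyTorch sketch of that one-stage fusion idea, written from the abstract alone: the module name PyramidRadarFusion, the pyramid channel widths, and the pooled 128-d Radar descriptor are illustrative assumptions, not the authors' implementation. In the paper's actual design the Radar descriptor would come from the graph-based structure extractor; a random tensor stands in for it here.

import torch
import torch.nn as nn

class PyramidRadarFusion(nn.Module):
    """Fuse a pooled Radar descriptor into image feature pyramids in one stage
    (hypothetical sketch; not the TacoDepth code)."""
    def __init__(self, image_channels=(64, 128, 256), radar_dim=128):
        super().__init__()
        # One lightweight 1x1 fusion block per pyramid level.
        self.blocks = nn.ModuleList(
            nn.Conv2d(c + radar_dim, c, kernel_size=1) for c in image_channels
        )

    def forward(self, image_feats, radar_feat):
        # image_feats: list of (B, C_i, H_i, W_i) pyramid features
        # radar_feat:  (B, radar_dim) pooled Radar structure descriptor
        fused = []
        for feat, block in zip(image_feats, self.blocks):
            b, _, h, w = feat.shape
            # Broadcast the Radar descriptor over the spatial grid,
            # concatenate with the image features, and fuse directly,
            # with no intermediate quasi-dense depth stage.
            r = radar_feat[:, :, None, None].expand(b, -1, h, w)
            fused.append(block(torch.cat([feat, r], dim=1)))
        return fused

# Toy usage: three pyramid levels and a 128-d Radar descriptor.
feats = [torch.randn(2, c, 32 // s, 32 // s) for c, s in [(64, 1), (128, 2), (256, 4)]]
radar = torch.randn(2, 128)
out = PyramidRadarFusion()(feats, radar)
print([f.shape for f in out])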

@article{wang2025_2504.11773,
  title={TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion},
  author={Yiran Wang and Jiaqi Li and Chaoyi Hong and Ruibo Li and Liusheng Sun and Xiao Song and Zhe Wang and Zhiguo Cao and Guosheng Lin},
  journal={arXiv preprint arXiv:2504.11773},
  year={2025}
}