
MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection

Renrui Zhang
Han Qiu
Tai Wang
Ziyu Guo
Yiwen Tang
Xuanzhuo Xu
Ziteng Cui
Yu Qiao
Peng Gao
Hongsheng Li
Abstract

Monocular 3D object detection has long been a challenging task in autonomous driving. Most existing methods follow conventional 2D detectors to first localize object centers, and then predict 3D attributes from neighboring features. However, using only local visual features is insufficient to understand scene-level 3D spatial structures and ignores long-range inter-object depth relations. In this paper, we introduce the first DETR framework for Monocular DEtection with a depth-guided TRansformer, named MonoDETR. We modify the vanilla transformer to be depth-aware and guide the whole detection process by contextual depth cues. Specifically, alongside the visual encoder that captures object appearances, we predict a foreground depth map and specialize a depth encoder to extract non-local depth embeddings. Then, we formulate 3D object candidates as learnable queries and propose a depth-guided decoder to conduct object-scene depth interactions. In this way, each object query estimates its 3D attributes adaptively from the depth-guided regions on the image and is no longer constrained to local visual features. On the KITTI benchmark with monocular images as input, MonoDETR achieves state-of-the-art performance and requires no extra dense depth annotations. Besides, our depth-guided modules can also serve as plug-and-play components to enhance multi-view 3D object detectors on the nuScenes dataset, demonstrating our superior generalization capacity. Code is available at this https URL.
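
To make the object-scene depth interaction described above concrete, the following is a minimal PyTorch sketch of one depth-guided decoder layer: object queries self-attend, then cross-attend to the depth encoder's non-local depth embeddings, and finally to the visual encoder's appearance features. The class name, layer ordering, and dimensions (DepthGuidedDecoderLayer, d_model=256, and so on) are illustrative assumptions based only on the abstract, not the paper's actual implementation.

# Hedged sketch of a depth-guided decoder layer (assumed structure, not the
# authors' code): self-attention over queries, cross-attention to depth
# embeddings, cross-attention to visual embeddings, then a feed-forward block.
import torch
import torch.nn as nn


class DepthGuidedDecoderLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=8, d_ffn=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.depth_cross_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.visual_cross_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ffn),
            nn.ReLU(inplace=True),
            nn.Dropout(dropout),
            nn.Linear(d_ffn, d_model),
        )
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(4)])

    def forward(self, queries, depth_embed, visual_embed):
        # queries:      (B, num_queries, d_model) learnable 3D object candidates
        # depth_embed:  (B, H*W, d_model) non-local depth embeddings (depth encoder)
        # visual_embed: (B, H*W, d_model) appearance features (visual encoder)
        q = self.norms[0](queries + self.self_attn(queries, queries, queries)[0])
        # Depth cross-attention: each query gathers scene-level depth cues.
        q = self.norms[1](q + self.depth_cross_attn(q, depth_embed, depth_embed)[0])
        # Visual cross-attention, guided by the depth-updated queries.
        q = self.norms[2](q + self.visual_cross_attn(q, visual_embed, visual_embed)[0])
        return self.norms[3](q + self.ffn(q))


# Usage example: 50 object queries over a flattened 800-token feature map.
if __name__ == "__main__":
    layer = DepthGuidedDecoderLayer()
    queries = torch.randn(2, 50, 256)
    depth_embed = torch.randn(2, 800, 256)
    visual_embed = torch.randn(2, 800, 256)
    print(layer(queries, depth_embed, visual_embed).shape)  # torch.Size([2, 50, 256])

Placing the depth cross-attention before the visual one reflects the abstract's idea that contextual depth cues guide where each query gathers appearance information, rather than the query being limited to features around a 2D center; the exact attention order in the published model may differ.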

@article{zhang2025_2203.13310,
  title={MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection},
  author={Renrui Zhang and Han Qiu and Tai Wang and Ziyu Guo and Yiwen Tang and Xuanzhuo Xu and Ziteng Cui and Yu Qiao and Peng Gao and Hongsheng Li},
  journal={arXiv preprint arXiv:2203.13310},
  year={2025}
}