Multi-Modal Decouple and Recouple Network for Robust 3D Object Detection

Rui Ding
Zhaonian Kuang
Yuzhe Ji
Meng Yang
Xinhu Zheng
Gang Hua
Main: 10 pages, 9 figures; bibliography: 3 pages
Abstract

Multi-modal 3D object detection with bird's eye view (BEV) representations has achieved notable advances on benchmarks. Nonetheless, accuracy may drop significantly in the real world due to data corruption, such as sensor configuration issues for LiDAR and adverse scene conditions for camera. One design bottleneck of previous models resides in the tight coupling of multi-modal BEV features during fusion, which can degrade overall system performance if one or both modalities are corrupted. To mitigate this, we propose a Multi-Modal Decouple and Recouple Network for robust 3D object detection under data corruption. Different modalities commonly share some high-level invariant features. We observe that these invariant features across modalities do not always fail simultaneously, because different types of data corruption affect each modality in distinct ways. Such invariant features can therefore be recovered across modalities for robust fusion under data corruption. To this end, we explicitly decouple Camera/LiDAR BEV features into modality-invariant and modality-specific parts. This allows invariant features to compensate for each other while mitigating the negative impact of a corrupted modality on the fusion. We then recouple these features into three experts that handle different types of data corruption, respectively: LiDAR, camera, and both. In each expert, modality-invariant features serve as robust information, while modality-specific features serve as a complement. Finally, we adaptively fuse the three experts to extract robust features for 3D object detection. For validation, we collect a benchmark with a large quantity of corrupted data for LiDAR, camera, and both, based on nuScenes. Our model is trained on clean nuScenes and tested on all types of data corruption. It consistently achieves the best accuracy on both corrupted and clean data compared to recent models.
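The decouple-and-recouple structure described above can be illustrated with a minimal PyTorch sketch. This is a hypothetical reconstruction, not the authors' code: the class and layer names (DecoupleRecoupleFusion, cam_inv, experts, gate), the 1x1-conv decoupling heads, the concatenation-based recoupling, and the pooled linear gate are all assumptions; only the overall structure (invariant/specific split, three experts for LiDAR-, camera-, and both-corrupted inputs, adaptive fusion) follows the abstract.

```python
import torch
import torch.nn as nn


class DecoupleRecoupleFusion(nn.Module):
    """Minimal sketch of the decouple-and-recouple idea (hypothetical)."""

    def __init__(self, c: int):
        super().__init__()
        # Decouple each modality's BEV feature into an invariant and a
        # specific part with separate projection heads (assumed 1x1 convs).
        self.cam_inv = nn.Conv2d(c, c, kernel_size=1)
        self.cam_spec = nn.Conv2d(c, c, kernel_size=1)
        self.lid_inv = nn.Conv2d(c, c, kernel_size=1)
        self.lid_spec = nn.Conv2d(c, c, kernel_size=1)
        # Three experts, one per corruption type (LiDAR / camera / both).
        self.experts = nn.ModuleList(
            nn.Conv2d(3 * c, c, kernel_size=3, padding=1) for _ in range(3)
        )
        # Gate that predicts adaptive fusion weights over the experts.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * c, 3)
        )

    def forward(self, cam_bev: torch.Tensor, lid_bev: torch.Tensor) -> torch.Tensor:
        # Shared invariant representation recovered from both modalities.
        inv = self.cam_inv(cam_bev) + self.lid_inv(lid_bev)
        cam_s = self.cam_spec(cam_bev)
        lid_s = self.lid_spec(lid_bev)
        # Recouple: invariant features are the robust backbone of every
        # expert; specific features act as a complement (one plausible
        # wiring, chosen purely for illustration).
        expert_in = [
            torch.cat([inv, cam_s, cam_s], dim=1),  # LiDAR corrupted: lean on camera
            torch.cat([inv, lid_s, lid_s], dim=1),  # camera corrupted: lean on LiDAR
            torch.cat([inv, cam_s, lid_s], dim=1),  # both corrupted: invariant-heavy
        ]
        w = torch.softmax(self.gate(torch.cat([cam_bev, lid_bev], dim=1)), dim=1)
        # Adaptively fuse the three expert outputs.
        return sum(
            w[:, i].view(-1, 1, 1, 1) * self.experts[i](expert_in[i])
            for i in range(3)
        )


# Toy usage: fuse 128-channel camera and LiDAR BEV maps on a 180x180 grid.
fusion = DecoupleRecoupleFusion(c=128)
out = fusion(torch.randn(2, 128, 180, 180), torch.randn(2, 128, 180, 180))
print(out.shape)  # torch.Size([2, 128, 180, 180])
```

The key design point the sketch captures is that every expert receives the invariant features, so a corruption that disables one modality's specific branch still leaves a usable shared signal for fusion.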
