CRADMap: Applied Distributed Volumetric Mapping with 5G-Connected Multi-Robots and 4D Radar Perception
Sparse, feature-based SLAM methods provide robust camera pose estimation but often fail to capture the level of detail required for inspection and scene-awareness tasks. Conversely, dense SLAM approaches generate richer scene reconstructions but impose a prohibitive computational load when building 3D maps onboard. We present CRADMap, a novel distributed volumetric mapping framework that addresses these issues by extending the state-of-the-art (SOTA) ORB-SLAM3 system with a COVINS backend for global optimization. Our volumetric reconstruction pipeline fuses dense keyframes at a centralized server via 5G connectivity, aggregating geometry and occupancy information from multiple autonomous mobile robots (AMRs) without overtaxing their onboard resources. Each AMR thus performs mapping independently while the backend constructs high-fidelity 3D maps in real time. To operate Beyond the Visible (BtV) and overcome the limitations of standard visual sensors, we automate a standalone 4D mmWave radar module that functions independently, without sensor fusion with SLAM. The BtV system enables the detection and mapping of occluded metallic objects in cluttered environments, enhancing situational awareness in inspection scenarios. Experimental validation in Section IV demonstrates the effectiveness of our framework.
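To make the centralized fusion step concrete, the sketch below illustrates one way a server could aggregate dense keyframes from several AMRs into a shared voxel occupancy grid, assuming each keyframe arrives as a globally optimized pose plus a point cloud. All names and the voxel resolution are hypothetical; this is a minimal illustration, not the CRADMap implementation.

# Minimal sketch of centralized volumetric fusion (hypothetical names,
# not the CRADMap API): each AMR streams keyframes as a globally
# optimized pose plus a dense point cloud; the server accumulates
# them into one shared voxel occupancy grid.
import numpy as np

VOXEL_SIZE = 0.05  # meters per voxel (assumed resolution)

class VolumetricServer:
    def __init__(self):
        # Sparse voxel grid: integer voxel index -> occupancy hit count.
        self.occupancy = {}

    def fuse_keyframe(self, pose, points):
        """pose: 4x4 world-from-camera transform (e.g., taken from a
        globally optimized trajectory); points: (N, 3) array of dense
        keyframe points in the camera frame."""
        # Transform points into the shared world frame.
        homo = np.hstack([points, np.ones((len(points), 1))])
        world = (pose @ homo.T).T[:, :3]
        # Quantize to voxel indices and accumulate occupancy evidence.
        for idx in map(tuple, np.floor(world / VOXEL_SIZE).astype(int)):
            self.occupancy[idx] = self.occupancy.get(idx, 0) + 1

server = VolumetricServer()
# Keyframes from different AMRs land in the same global grid, so the
# server-side map grows while each robot only tracks and transmits.
server.fuse_keyframe(np.eye(4), np.random.rand(1000, 3))

Keeping the grid sparse (a dictionary keyed by voxel index) is one plausible way to keep server memory proportional to observed space rather than to the full environment bounding box.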
@article{qureshi2025_2503.00262,
  title={CRADMap: Applied Distributed Volumetric Mapping with 5G-Connected Multi-Robots and 4D Radar Perception},
  author={Maaz Qureshi and Alexander Werner and Zhenan Liu and Amir Khajepour and George Shaker and William Melek},
  journal={arXiv preprint arXiv:2503.00262},
  year={2025}
}