ADGaussian: Generalizable Gaussian Splatting for Autonomous Driving with Multi-modal Inputs

We present a novel approach, termed ADGaussian, for generalizable street scene reconstruction. The proposed method enables high-quality rendering from single-view input. Unlike prior Gaussian Splatting methods that primarily focus on geometry refinement, we emphasize the importance of jointly optimizing image and depth features for accurate Gaussian prediction. To this end, we first incorporate sparse LiDAR depth as an additional input modality, formulating the Gaussian prediction process as a joint learning framework of visual information and geometric cues. Furthermore, we propose a multi-modal feature matching strategy coupled with a multi-scale Gaussian decoding model to enhance the joint refinement of multi-modal features, thereby enabling efficient multi-modal Gaussian learning. Extensive experiments on two large-scale autonomous driving datasets, Waymo and KITTI, demonstrate that ADGaussian achieves state-of-the-art performance and exhibits superior zero-shot generalization under novel-view shifts.
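As a rough illustration of the idea described above, the sketch below fuses per-pixel image features with features lifted from sparse LiDAR depth and regresses Gaussian parameters per pixel. This is a minimal toy, not the paper's architecture: the channel counts, the linear "encoder" and "head" weights, and the 14-parameter Gaussian layout (3D mean, rotation quaternion, scale, opacity, RGB) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 4, 6            # toy image resolution (assumption)
C_IMG, C_DEPTH = 8, 4  # hypothetical feature channel counts

# Per-pixel image features and sparse LiDAR depth (zero where no return).
img_feat = rng.standard_normal((H, W, C_IMG))
sparse_depth = np.where(rng.random((H, W, 1)) < 0.2, 10.0, 0.0)

# Hypothetical depth encoder: lift the scalar depth to C_DEPTH channels.
W_depth = rng.standard_normal((1, C_DEPTH))
depth_feat = sparse_depth @ W_depth

# Joint feature: concatenate visual and geometric cues per pixel,
# mirroring the joint image/depth formulation in the abstract.
joint = np.concatenate([img_feat, depth_feat], axis=-1)  # (H, W, C_IMG + C_DEPTH)

# Each pixel regresses one Gaussian: mean (3) + quaternion (4) +
# scale (3) + opacity (1) + RGB (3) = 14 parameters (illustrative layout).
N_PARAMS = 14
W_head = rng.standard_normal((C_IMG + C_DEPTH, N_PARAMS))
gaussians = joint @ W_head

print(gaussians.shape)  # one 14-parameter Gaussian per pixel
```

In a real pipeline the two linear maps would be learned encoders/decoders, and the multi-scale decoding would operate on a feature pyramid rather than a single resolution.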
@article{song2025_2504.00437,
  title={ADGaussian: Generalizable Gaussian Splatting for Autonomous Driving with Multi-modal Inputs},
  author={Qi Song and Chenghong Li and Haotong Lin and Sida Peng and Rui Huang},
  journal={arXiv preprint arXiv:2504.00437},
  year={2025}
}