Novel view synthesis of urban scenes is essential for autonomous driving-related applications. Existing NeRF- and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner. Unlike existing feed-forward, pixel-aligned 3DGS methods, which often suffer from issues like multi-view inconsistencies and duplicated content, our approach predicts 3D Gaussians across multiple frames within a unified volume using a 3D convolutional network. This is achieved by initializing 3D Gaussians with noisy depth predictions, and then refining their geometric properties in 3D space and predicting color based on 2D textures. Our model also handles distant views and the sky with a flexible hemisphere background model. This enables us to perform fast, feed-forward reconstruction while achieving real-time rendering. Experimental evaluations on the KITTI-360 and Waymo datasets show that our method achieves state-of-the-art quality compared to existing feed-forward 3DGS- and NeRF-based methods.
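To make the volume-based idea described above concrete, the following is a minimal sketch (not the authors' code) of the feed-forward pipeline the abstract outlines: back-project noisy per-frame depth into a unified scene volume and let a 3D convolutional network refine per-voxel Gaussian geometry. All names (VolumeRefiner, lift_depth_to_points), the channel layout, and the toy volume resolution are illustrative assumptions, not EVolSplat's actual implementation; the texture-based color prediction and hemisphere background model are omitted.

```python
# Hypothetical sketch of a feed-forward, volume-based Gaussian prediction step,
# loosely following the abstract: depth -> point cloud -> unified volume -> 3D CNN.
import torch
import torch.nn as nn

class VolumeRefiner(nn.Module):
    """3D convolutional network that refines Gaussian geometry inside a unified volume."""
    def __init__(self, in_ch=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            # Per-voxel outputs (assumed layout): position offset (3), scale (3),
            # rotation quaternion (4), opacity (1).
            nn.Conv3d(hidden, 11, 3, padding=1),
        )

    def forward(self, feat_volume):
        return self.net(feat_volume)

def lift_depth_to_points(depth, K, cam2world):
    """Back-project a (possibly noisy) depth map to world-space points."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()          # (H, W, 3)
    cam_pts = (torch.linalg.inv(K) @ pix.reshape(-1, 3).T).T * depth.reshape(-1, 1)
    world = (cam2world[:3, :3] @ cam_pts.T).T + cam2world[:3, 3]
    return world                                                            # (H*W, 3)

if __name__ == "__main__":
    # Toy example: one frame, random depth, simple intrinsics, a coarse 32^3 volume.
    depth = torch.rand(64, 64) * 20.0
    K = torch.tensor([[50.0, 0.0, 32.0], [0.0, 50.0, 32.0], [0.0, 0.0, 1.0]])
    cam2world = torch.eye(4)
    pts = lift_depth_to_points(depth, K, cam2world)

    # Scatter point occupancy into a unified volume covering the assumed scene extent.
    res, extent = 32, 40.0
    idx = ((pts / extent + 0.5) * res).long().clamp(0, res - 1)
    volume = torch.zeros(1, 4, res, res, res)
    volume[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # occupancy channel

    gaussian_params = VolumeRefiner()(volume)              # (1, 11, res, res, res)
    print(gaussian_params.shape)
```

Because the refinement operates on the shared volume rather than on per-pixel Gaussians, predictions from multiple input frames land in the same 3D grid, which is what allows the approach to avoid the duplicated content that pixel-aligned feed-forward methods can produce.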
@article{miao2025_2503.20168,
  title   = {EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis},
  author  = {Sheng Miao and Jiaxin Huang and Dongfeng Bai and Xu Yan and Hongyu Zhou and Yue Wang and Bingbing Liu and Andreas Geiger and Yiyi Liao},
  journal = {arXiv preprint arXiv:2503.20168},
  year    = {2025}
}