VoteSplat: Hough Voting Gaussian Splatting for 3D Scene Understanding

Minchao Jiang
Shunyu Jia
Jiaming Gu
Xiaoyuan Lu
Guangming Zhu
Anqi Dong
Liang Zhang
Main: 8 pages · 9 figures · 3 tables · Bibliography: 3 pages
Abstract

3D Gaussian Splatting (3DGS) has become the workhorse for high-quality, real-time rendering in novel view synthesis of 3D scenes. However, existing methods focus primarily on geometric and appearance modeling and lack deeper scene understanding, while also incurring high training costs that complicate the originally streamlined differentiable rendering pipeline. To this end, we propose VoteSplat, a novel 3D scene understanding framework that integrates Hough voting with 3DGS. Specifically, the Segment Anything Model (SAM) is utilized for instance segmentation, extracting objects and generating 2D vote maps. We then embed spatial offset vectors into Gaussian primitives. These offsets construct 3D spatial votes by associating them with 2D image votes, while depth distortion constraints refine localization along the depth axis. For open-vocabulary object localization, VoteSplat maps 2D image semantics to 3D point clouds via voting points, reducing the training costs associated with high-dimensional CLIP features while preserving semantic unambiguity. Extensive experiments demonstrate the effectiveness of VoteSplat in open-vocabulary 3D instance localization, 3D point cloud understanding, click-based 3D object localization, hierarchical segmentation, and ablation studies. Our code is available at this https URL
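As a rough illustration of the 2D vote maps described above, the following is a minimal sketch (not the authors' implementation) of Hough-voting-style vote map construction: given an instance segmentation mask such as one produced by SAM, each foreground pixel is assigned the offset vector pointing from the pixel to its instance centroid. The function name `vote_map_2d` and the (dy, dx) offset convention are assumptions for this sketch.

```python
import numpy as np

def vote_map_2d(instance_mask: np.ndarray) -> np.ndarray:
    """Build a 2D vote map from an instance segmentation mask.

    For each pixel, store the offset vector from the pixel to the
    centroid of its instance; background (id 0) pixels get zeros.
    instance_mask: (H, W) integer array of instance ids (0 = background).
    Returns: (H, W, 2) float array of (dy, dx) offsets, so that
    pixel + offset lands on the instance centroid.
    """
    h, w = instance_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    votes = np.zeros((h, w, 2), dtype=np.float32)
    for inst_id in np.unique(instance_mask):
        if inst_id == 0:
            continue  # skip background
        m = instance_mask == inst_id
        cy, cx = ys[m].mean(), xs[m].mean()  # instance centroid
        votes[m, 0] = cy - ys[m]
        votes[m, 1] = cx - xs[m]
    return votes
```

At inference, accumulating the endpoints `pixel + offset` over all pixels of an instance concentrates votes at the object center, which is the 2D signal that VoteSplat associates with per-Gaussian 3D offsets.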

@article{jiang2025_2506.22799,
  title={VoteSplat: Hough Voting Gaussian Splatting for 3D Scene Understanding},
  author={Minchao Jiang and Shunyu Jia and Jiaming Gu and Xiaoyuan Lu and Guangming Zhu and Anqi Dong and Liang Zhang},
  journal={arXiv preprint arXiv:2506.22799},
  year={2025}
}