
GSplatVNM: Point-of-View Synthesis for Visual Navigation Models Using Gaussian Splatting

Abstract

This paper presents a novel approach to image-goal navigation by integrating 3D Gaussian Splatting (3DGS) with Visual Navigation Models (VNMs), a method we refer to as GSplatVNM. VNMs offer a promising paradigm for image-goal navigation by guiding a robot through a sequence of point-of-view images without requiring metric localization or environment-specific training. However, constructing a dense and traversable sequence of target viewpoints from start to goal remains a central challenge, particularly when the available image database is sparse. To address this challenge, we propose a 3DGS-based viewpoint synthesis framework for VNMs that synthesizes intermediate viewpoints to seamlessly bridge gaps in sparse data while significantly reducing storage overhead. Experimental results in a photorealistic simulator demonstrate that our approach not only enhances navigation efficiency but also exhibits robustness under varying levels of image database sparsity.
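As a rough illustration of the idea described in the abstract, the sketch below shows how a sparse start-to-goal image sequence might be densified by rendering intermediate viewpoints from a 3DGS model and passing them to a navigation policy as point-of-view targets. This is a minimal, hypothetical sketch, not the authors' implementation; names such as render_view, interpolate_poses, and build_viewpoint_sequence are placeholders, and the rendering call is stubbed out.

# Hypothetical sketch of viewpoint bridging for a visual navigation model.
# All function names and the pose representation are illustrative assumptions.
import numpy as np

def interpolate_poses(pose_a, pose_b, n_steps):
    """Linearly interpolate position and yaw between two sparse database poses."""
    poses = []
    for t in np.linspace(0.0, 1.0, n_steps + 2)[1:-1]:  # skip the two endpoints
        poses.append({
            "position": (1 - t) * pose_a["position"] + t * pose_b["position"],
            "yaw": (1 - t) * pose_a["yaw"] + t * pose_b["yaw"],
        })
    return poses

def render_view(gaussian_model, pose):
    """Placeholder for rendering a point-of-view image from a 3DGS model."""
    return np.zeros((240, 320, 3), dtype=np.uint8)  # dummy RGB frame

def build_viewpoint_sequence(gaussian_model, sparse_poses, n_intermediate=3):
    """Densify a sparse start-to-goal pose sequence with synthesized viewpoints."""
    sequence = []
    for pose_a, pose_b in zip(sparse_poses[:-1], sparse_poses[1:]):
        sequence.append(render_view(gaussian_model, pose_a))
        for pose in interpolate_poses(pose_a, pose_b, n_intermediate):
            sequence.append(render_view(gaussian_model, pose))
    sequence.append(render_view(gaussian_model, sparse_poses[-1]))
    return sequence

if __name__ == "__main__":
    sparse_poses = [
        {"position": np.array([0.0, 0.0]), "yaw": 0.0},
        {"position": np.array([2.0, 1.0]), "yaw": 0.5},
        {"position": np.array([4.0, 1.5]), "yaw": 0.2},
    ]
    goal_images = build_viewpoint_sequence(None, sparse_poses)
    print(f"{len(goal_images)} point-of-view targets for the navigation model")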

@article{honda2025_2503.05152,
  title={GSplatVNM: Point-of-View Synthesis for Visual Navigation Models Using Gaussian Splatting},
  author={Kohei Honda and Takeshi Ishita and Yasuhiro Yoshimura and Ryo Yonetani},
  journal={arXiv preprint arXiv:2503.05152},
  year={2025}
}