EVPGS: Enhanced View Prior Guidance for Splatting-based Extrapolated View Synthesis

Abstract

Gaussian Splatting (GS)-based methods rely on sufficient training view coverage and perform synthesis primarily on interpolated views. In this work, we tackle the more challenging and underexplored Extrapolated View Synthesis (EVS) task: enabling GS-based models trained with limited view coverage to generalize well to extrapolated views. To this end, we propose a view augmentation framework that guides training through a coarse-to-fine process. At the coarse stage, we reduce rendering artifacts caused by insufficient view coverage by introducing a regularization strategy at both the appearance and geometry levels. At the fine stage, we generate reliable view priors that provide further training guidance; specifically, we incorporate occlusion awareness into the view prior generation process and refine the view priors with the aid of the coarse-stage output. We call our framework Enhanced View Prior Guidance for Splatting (EVPGS). To comprehensively evaluate EVPGS on the EVS task, we collect a real-world dataset, Merchandise3D, dedicated to the EVS scenario. Experiments on three datasets, both real and synthetic, demonstrate that EVPGS achieves state-of-the-art performance, improving synthesis quality at extrapolated views for GS-based methods both qualitatively and quantitatively. We will make our code, dataset, and models publicly available.
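
The abstract outlines a two-stage, coarse-to-fine guidance scheme. Below is a minimal sketch of how such a training loop might be organized; every name in it (GSModel, photometric_loss, appearance_reg, geometry_reg, generate_view_prior, refine_prior) is a hypothetical placeholder to fix the interfaces, not the authors' implementation.

from dataclasses import dataclass

# --- Illustrative stubs: the real model, losses, and priors are not
# specified by the abstract; these stand-ins only fix the interfaces. ---
@dataclass
class View:
    image: float = 0.0                   # stand-in for an image tensor

class GSModel:
    def render(self, view):              # stand-in for a splatted rendering
        return view.image
    def update(self, loss):              # stand-in for a gradient step
        pass

def photometric_loss(render, target): return abs(render - target)
def appearance_reg(render): return 0.0   # appearance-level regularizer
def geometry_reg(render): return 0.0     # geometry-level regularizer
def generate_view_prior(view, occlusion_aware=True): return view.image
def refine_prior(prior, coarse_render): return 0.5 * (prior + coarse_render)

def coarse_stage(model, train_views, extrapolated_views, num_steps):
    """Stage 1: regularize appearance and geometry to suppress the
    artifacts that limited view coverage induces at extrapolated poses."""
    for _ in range(num_steps):
        for view in train_views:
            loss = photometric_loss(model.render(view), view.image)
            for aug in extrapolated_views:  # no ground truth at these poses
                aug_render = model.render(aug)
                loss += appearance_reg(aug_render) + geometry_reg(aug_render)
            model.update(loss)

def fine_stage(model, coarse_model, extrapolated_views, num_steps):
    """Stage 2: supervise extrapolated views with occlusion-aware priors,
    refined using the cleaner renderings from the coarse stage."""
    priors = [
        refine_prior(generate_view_prior(v, occlusion_aware=True),
                     coarse_model.render(v))
        for v in extrapolated_views
    ]
    for _ in range(num_steps):
        for view, prior in zip(extrapolated_views, priors):
            model.update(photometric_loss(model.render(view), prior))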

@article{li2025_2503.21816,
  title={EVPGS: Enhanced View Prior Guidance for Splatting-based Extrapolated View Synthesis},
  author={Jiahe Li and Feiyu Wang and Xiaochao Qu and Chengjing Wu and Luoqi Liu and Ting Liu},
  journal={arXiv preprint arXiv:2503.21816},
  year={2025}
}