VGNC: Reducing the Overfitting of Sparse-view 3DGS via Validation-guided Gaussian Number Control

20 April 2025
Lifeng Lin, Rongfeng Lu, Quan Chen, Haofan Ren, Ming Lu, Yaoqi Sun, Chenggang Yan, Anke Xue
Abstract

Sparse-view 3D reconstruction is a fundamental yet challenging task in practical 3D reconstruction applications. Recently, many methods based on the 3D Gaussian Splatting (3DGS) framework have been proposed to address this task. Although these methods have made considerable advances, they still suffer from significant overfitting. To reduce this overfitting, we introduce Validation-guided Gaussian Number Control (VGNC), a novel approach based on generative novel view synthesis (NVS) models. To the best of our knowledge, this is the first attempt to alleviate the overfitting of sparse-view 3DGS with generative validation images. Specifically, we first introduce a validation image generation method based on a generative NVS model. We then propose a Gaussian number control strategy that uses the generated validation images to determine the optimal number of Gaussians, thereby reducing overfitting. We evaluate VGNC through extensive experiments on various sparse-view 3DGS baselines and datasets. The results show that our approach not only reduces overfitting but also improves rendering quality on the test set while decreasing the number of Gaussian points. This reduction lowers storage demands and accelerates both training and rendering. The code will be released.
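The control strategy described in the abstract can be pictured as a small selection loop: render the generated validation views at several candidate Gaussian budgets and keep the budget with the best validation score. The sketch below is a minimal illustration of that idea only, not the authors' implementation; ToyModel, select_gaussian_count, and the random stand-in validation images are all hypothetical, and in VGNC the validation views would come from a generative NVS model rather than random noise.

"""
Minimal sketch of validation-guided Gaussian number control.
Assumption: a trained sparse-view 3DGS model exposes a render that
depends on its Gaussian budget. ToyModel below is a stand-in whose
held-out error grows away from an (unknown to the selector)
sweet-spot count, mimicking under- and overfitting.
"""
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images in [0, max_val]."""
    mse = float(np.mean((a - b) ** 2))
    return float("inf") if mse == 0.0 else float(10.0 * np.log10(max_val ** 2 / mse))

class ToyModel:
    """Hypothetical stand-in for a sparse-view 3DGS model."""
    def __init__(self, sweet_spot: int = 200_000, seed: int = 0):
        self.sweet_spot = sweet_spot
        self.rng = np.random.default_rng(seed)

    def render(self, target: np.ndarray, num_gaussians: int) -> np.ndarray:
        # Toy behavior: rendering error grows with distance from the
        # sweet-spot Gaussian count (too few -> underfit, too many -> overfit).
        sigma = 0.02 + 0.1 * abs(num_gaussians - self.sweet_spot) / self.sweet_spot
        noisy = target + self.rng.normal(0.0, sigma, size=target.shape)
        return np.clip(noisy, 0.0, 1.0)

def select_gaussian_count(model, val_images, candidate_counts):
    """Core idea from the abstract: score each candidate Gaussian budget
    on the (generated) validation views and keep the best-scoring one."""
    return max(
        candidate_counts,
        key=lambda n: np.mean([psnr(model.render(img, n), img) for img in val_images]),
    )

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # In VGNC these would be views synthesized by a generative NVS model;
    # random images are used here purely to exercise the selection loop.
    val_images = [rng.random((32, 32, 3)) for _ in range(4)]
    model = ToyModel()
    counts = [50_000, 100_000, 200_000, 400_000, 800_000]
    print(f"selected Gaussian count: {select_gaussian_count(model, val_images, counts)}")

Note that selecting the budget on generated validation views, rather than on the sparse training views themselves, is what lets this loop detect overfitting: a model that memorizes the training views scores poorly on held-out views.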

@article{lin2025_2504.14548,
  title={VGNC: Reducing the Overfitting of Sparse-view 3DGS via Validation-guided Gaussian Number Control},
  author={Lifeng Lin and Rongfeng Lu and Quan Chen and Haofan Ren and Ming Lu and Yaoqi Sun and Chenggang Yan and Anke Xue},
  journal={arXiv preprint arXiv:2504.14548},
  year={2025}
}