Improving Novel View Synthesis of 360° Scenes in Extremely Sparse Views by Jointly Training Hemisphere Sampled Synthetic Images

25 May 2025
Guangan Chen, A. Truong, Hanhe Lin, M. Vlaminck, Wilfried Philips, H. Luong
Topics: 3DGS
arXiv: 2505.19264
Main: 5 pages · 3 figures · 4 tables · Bibliography: 1 page
Abstract

Novel view synthesis in 360° scenes from extremely sparse input views is essential for applications such as virtual reality and augmented reality. This paper presents a framework for novel view synthesis under such extremely sparse-view conditions. Since typical structure-from-motion methods cannot estimate camera poses from extremely sparse views, we apply DUSt3R to estimate the camera poses and generate a dense point cloud. Using the estimated camera poses, we densely sample additional views from the upper-hemisphere space of the scene and render synthetic images of the point cloud from these views. Training a 3D Gaussian Splatting model on a combination of the reference images from the sparse input views and the densely sampled synthetic images yields larger scene coverage in 3D space, addressing the overfitting challenge caused by the limited input in sparse-view cases. By retraining a diffusion-based image enhancement model on a dataset we create, we further improve the quality of the point-cloud-rendered images by removing artifacts. We compare our framework with benchmark methods using only four input views, demonstrating significant improvement in novel view synthesis under extremely sparse-view conditions for 360° scenes.
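To make the hemisphere-sampling step concrete, the sketch below shows one plausible way to densely sample camera poses on the upper hemisphere around a scene centre and build look-at camera-to-world matrices, from which point-cloud renderings of synthetic views could be produced. This is an illustrative assumption, not the authors' released code: the function name `hemisphere_poses`, the golden-angle sampling pattern, the radius, and the view count are all hypothetical choices.

```python
# Illustrative sketch (assumed, not the paper's implementation): densely sample
# camera positions on the upper hemisphere around a scene centre and build
# look-at camera-to-world matrices for rendering synthetic training views.
import numpy as np

def hemisphere_poses(center, radius, n_views, up=(0.0, 0.0, 1.0)):
    """Return n_views 4x4 camera-to-world matrices on the upper hemisphere,
    all looking at the scene centre."""
    center = np.asarray(center, dtype=np.float64)
    up = np.asarray(up, dtype=np.float64)
    golden = np.pi * (3.0 - np.sqrt(5.0))  # golden-angle azimuth increment
    poses = []
    for i in range(n_views):
        # Elevation fraction in (0, 1): avoids a degenerate pose at the zenith.
        z = (i + 0.5) / n_views
        r = np.sqrt(1.0 - z * z)           # radius of the horizontal circle
        theta = golden * i                 # azimuth angle
        pos = center + radius * np.array([r * np.cos(theta), r * np.sin(theta), z])
        # Look-at basis: forward points from the camera toward the scene centre.
        forward = center - pos
        forward /= np.linalg.norm(forward)
        right = np.cross(forward, up)
        right /= np.linalg.norm(right)
        true_up = np.cross(right, forward)
        c2w = np.eye(4)
        c2w[:3, 0] = right
        c2w[:3, 1] = true_up
        c2w[:3, 2] = forward
        c2w[:3, 3] = pos
        poses.append(c2w)
    return poses

# Example: 200 synthetic viewpoints on a hemisphere of radius 3 around the origin.
views = hemisphere_poses(center=(0.0, 0.0, 0.0), radius=3.0, n_views=200)
print(len(views), views[0].shape)  # -> 200 (4, 4)
```

In practice the hemisphere radius and centre would be chosen from the DUSt3R-estimated poses and point cloud rather than fixed by hand; the sketch only illustrates the geometry of dense upper-hemisphere view sampling.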
