Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors

17 March 2025
Katja Schwarz
Norman Mueller
Peter Kontschieder
    3DGS
Abstract

Synthesizing consistent and photorealistic 3D scenes is an open problem in computer vision. Video diffusion models generate impressive videos but cannot directly synthesize 3D representations, i.e., they lack 3D consistency in the generated sequences. In addition, directly training generative 3D models is challenging due to a lack of 3D training data at scale. In this work, we present Generative Gaussian Splatting (GGS) -- a novel approach that integrates a 3D representation with a pre-trained latent video diffusion model. Specifically, our model synthesizes a feature field parameterized via 3D Gaussian primitives. The feature field is then either rendered to feature maps and decoded into multi-view images, or directly upsampled into a 3D radiance field. We evaluate our approach on two common benchmark datasets for scene synthesis, RealEstate10K and ScanNet+, and find that our proposed GGS model significantly improves both the 3D consistency of the generated multi-view images and the quality of the generated 3D scenes over all relevant baselines. Compared to a similar model without 3D representation, GGS improves FID on the generated 3D scenes by ~20% on both RealEstate10K and ScanNet+. Project page: this https URL
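
The sketch below is only an illustration of the pipeline stage described in the abstract, not the authors' implementation: a feature field carried by 3D Gaussian primitives is rendered to 2D feature maps, which a decoder then turns into an RGB view. All class names, shapes, the toy point-splatting renderer, and the small convolutional decoder are assumptions made for the example; the actual GGS model uses a proper anisotropic Gaussian splatting renderer and a pre-trained latent video diffusion decoder.

```python
# Illustrative sketch only -- not the GGS implementation from the paper.
# Assumed components: a learnable Gaussian feature field, a toy rasterizer
# that scatters opacity-weighted features into pixels, and a stand-in decoder.
import torch
import torch.nn as nn


class GaussianFeatureField(nn.Module):
    """3D Gaussian primitives carrying latent features (hypothetical layout)."""

    def __init__(self, num_gaussians: int = 4096, feat_dim: int = 16):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_gaussians, 3))    # xyz centers
        self.opacity = nn.Parameter(torch.zeros(num_gaussians, 1))  # pre-sigmoid
        self.features = nn.Parameter(torch.randn(num_gaussians, feat_dim))


def splat_features(field: GaussianFeatureField, K: torch.Tensor,
                   cam_T_world: torch.Tensor, hw=(64, 64)) -> torch.Tensor:
    """Toy feature rasterizer: project Gaussian centers through a pinhole camera
    and scatter opacity-weighted features into the nearest pixel. A real 3DGS
    renderer would splat anisotropic 2D Gaussians with alpha compositing."""
    H, W = hw
    # World -> camera -> image-plane projection of the Gaussian centers.
    pts_h = torch.cat([field.means, torch.ones_like(field.means[:, :1])], dim=-1)
    cam = (cam_T_world @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 1e-3
    uvz = (K @ cam.T).T
    uv = uvz[:, :2] / uvz[:, 2:3].clamp(min=1e-3)
    u = uv[:, 0].round().long().clamp(0, W - 1)
    v = uv[:, 1].round().long().clamp(0, H - 1)
    alpha = torch.sigmoid(field.opacity)
    feat_map = torch.zeros(field.features.shape[1], H * W)
    weight = torch.zeros(1, H * W)
    idx = (v * W + u)[in_front]
    feat_map.index_add_(1, idx, (alpha * field.features)[in_front].T)
    weight.index_add_(1, idx, alpha[in_front].T)
    feat_map = feat_map / weight.clamp(min=1e-6)
    return feat_map.view(1, -1, H, W)  # (1, feat_dim, H, W)


class FeatureDecoder(nn.Module):
    """Stand-in for the decoder that maps rendered feature maps to an RGB view;
    the architecture here is an assumption, not the paper's diffusion decoder."""

    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        return self.net(feat_map)


if __name__ == "__main__":
    field = GaussianFeatureField()
    K = torch.tensor([[64.0, 0.0, 32.0], [0.0, 64.0, 32.0], [0.0, 0.0, 1.0]])
    cam_T_world = torch.eye(4)
    cam_T_world[2, 3] = 4.0  # push the scene in front of the camera
    feats = splat_features(field, K, cam_T_world)   # render feature map for one view
    image = FeatureDecoder()(feats)                  # decode features to RGB
    print(image.shape)  # torch.Size([1, 3, 64, 64])
```

Because the feature field lives in 3D, every rendered view is drawn from the same set of primitives, which is the property the paper relies on for multi-view consistency; the decoding step then restores image detail that the coarse feature rendering alone would lack.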

@article{schwarz2025_2503.13272,
  title={Generative Gaussian Splatting: Generating 3D Scenes with Video Diffusion Priors},
  author={Katja Schwarz and Norman Mueller and Peter Kontschieder},
  journal={arXiv preprint arXiv:2503.13272},
  year={2025}
}