ResearchTrend.AI
3D Gaussian Splatting as Markov Chain Monte Carlo

15 April 2024
Shakiba Kheradmand
Daniel Rebain
Gopal Sharma
Weiwei Sun
Jeff Tseng
Hossam N. Isack
Abhishek Kar
Andrea Tagliasacchi
Kwang Moo Yi
    3DGS
Abstract

While 3D Gaussian Splatting has recently become popular for neural rendering, current methods rely on carefully engineered cloning and splitting strategies for placing Gaussians, which can lead to poor-quality renderings and a reliance on good initialization. In this work, we rethink the set of 3D Gaussians as a random sample drawn from an underlying probability distribution describing the physical representation of the scene; in other words, as Markov Chain Monte Carlo (MCMC) samples. Under this view, we show that the 3D Gaussian updates can be converted into Stochastic Gradient Langevin Dynamics (SGLD) updates by simply introducing noise. We then rewrite the densification and pruning strategies in 3D Gaussian Splatting as simply a deterministic state transition of MCMC samples, removing these heuristics from the framework. To do so, we revise the 'cloning' of Gaussians into a relocalization scheme that approximately preserves sample probability. To encourage efficient use of Gaussians, we introduce a regularizer that promotes the removal of unused Gaussians. On various standard evaluation scenes, we show that our method provides improved rendering quality, easy control over the number of Gaussians, and robustness to initialization.
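The key step described in the abstract is turning a plain gradient-descent update into an SGLD update by injecting noise. As a rough illustration only (not the paper's actual update rule, which additionally conditions the noise on per-Gaussian quantities such as opacity), a generic SGLD step over a parameter vector might look like:

```python
import numpy as np

def sgld_step(theta, grad_fn, lr=1e-3, rng=None):
    """One generic Stochastic Gradient Langevin Dynamics update.

    theta   : current parameters (e.g. a flattened vector of Gaussian attributes)
    grad_fn : callable returning the loss gradient at theta
    lr      : learning rate; it also scales the injected Gaussian noise,
              which is what distinguishes SGLD from plain gradient descent
    """
    rng = np.random.default_rng() if rng is None else rng
    # Langevin noise term: zero-mean Gaussian with variance 2 * lr
    noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * lr)
    return theta - lr * grad_fn(theta) + noise
```

With the noise term removed (or `lr=0`), this reduces to ordinary gradient descent; with it, the iterates behave as samples from a distribution shaped by the loss, which is the MCMC view the paper builds on.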

@article{kheradmand2025_2404.09591,
  title={3D Gaussian Splatting as Markov Chain Monte Carlo},
  author={Shakiba Kheradmand and Daniel Rebain and Gopal Sharma and Weiwei Sun and Jeff Tseng and Hossam Isack and Abhishek Kar and Andrea Tagliasacchi and Kwang Moo Yi},
  journal={arXiv preprint arXiv:2404.09591},
  year={2025}
}