3D Gaussian Splatting against Moving Objects for High-Fidelity Street Scene Reconstruction

15 March 2025
Peizhen Zheng
Longfei Wei
Dongjing Jiang
Jianfei Zhang
Abstract

The accurate reconstruction of dynamic street scenes is critical for applications in autonomous driving, augmented reality, and virtual reality. Traditional methods relying on dense point clouds and triangular meshes struggle with moving objects, occlusions, and real-time processing constraints, limiting their effectiveness in complex urban environments. While multi-view stereo and neural radiance fields have advanced 3D reconstruction, they face challenges in computational efficiency and handling scene dynamics. This paper proposes a novel 3D Gaussian point distribution method for dynamic street scene reconstruction. Our approach introduces an adaptive transparency mechanism that eliminates moving objects while preserving high-fidelity static scene details. Additionally, iterative refinement of Gaussian point distribution enhances geometric accuracy and texture representation. We integrate directional encoding with spatial position optimization to improve storage and rendering efficiency, reducing redundancy while maintaining scene integrity. Experimental results demonstrate that our method achieves high reconstruction quality, improved rendering performance, and adaptability in large-scale dynamic environments. These contributions establish a robust framework for real-time, high-precision 3D reconstruction, advancing the practicality of dynamic scene modeling across multiple applications.
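The abstract mentions an adaptive transparency mechanism that suppresses moving objects, but the page gives no details of how it works. One plausible interpretation is to track each Gaussian's opacity across frames and treat high temporal variance as evidence of a moving object. The sketch below illustrates that idea only; the function name, threshold, and variance criterion are all hypothetical and are not taken from the paper.

```python
import numpy as np

def suppress_dynamic_gaussians(opacities, var_threshold=0.05):
    """Hypothetical filter in the spirit of the paper's adaptive transparency.

    opacities: array of shape [N, T] -- per-Gaussian opacity observed
    over T frames. Gaussians whose opacity fluctuates strongly over time
    are treated as belonging to moving objects and are zeroed out.
    """
    temporal_var = opacities.var(axis=1)           # [N] variance over time
    static_mask = temporal_var < var_threshold     # stable opacity -> static scene
    filtered = opacities.mean(axis=1) * static_mask
    return filtered, static_mask

# Toy example: 3 Gaussians observed over 4 frames.
ops = np.array([
    [0.9, 0.9, 0.9, 0.9],   # stable (e.g. a wall) -> kept
    [0.8, 0.1, 0.7, 0.0],   # fluctuating (e.g. a passing car) -> suppressed
    [0.5, 0.5, 0.5, 0.5],   # stable -> kept
])
filtered, mask = suppress_dynamic_gaussians(ops)
```

A real implementation would of course operate on rendered per-pixel contributions inside the splatting pipeline rather than on a standalone opacity array, but the variance-based gating captures the basic intuition of removing transient geometry while preserving static detail.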

@article{zheng2025_2503.12001,
  title={3D Gaussian Splatting against Moving Objects for High-Fidelity Street Scene Reconstruction},
  author={Peizhen Zheng and Longfei Wei and Dongjing Jiang and Jianfei Zhang},
  journal={arXiv preprint arXiv:2503.12001},
  year={2025}
}