BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting

20 March 2025
Yiren Lu
Yunlai Zhou
Disheng Liu
Tuo Liang
Yu Yin
Topics: 3DGS
Abstract

3D Gaussian Splatting (3DGS) has shown remarkable potential for static scene reconstruction, and recent advancements have extended its application to dynamic scenes. However, reconstruction quality depends heavily on high-quality input images and precise camera poses, which are often difficult to obtain in real-world scenarios. Capturing dynamic scenes with handheld monocular cameras, for instance, typically involves simultaneous movement of both the camera and objects within a single exposure. This combined motion frequently results in image blur that existing methods cannot adequately handle. To address these challenges, we introduce BARD-GS, a novel approach for robust dynamic scene reconstruction that effectively handles blurry inputs and imprecise camera poses. Our method comprises two main components: 1) camera motion deblurring and 2) object motion deblurring. By explicitly decomposing motion blur into camera motion blur and object motion blur and modeling them separately, we achieve significantly improved rendering results in dynamic regions. In addition, we collect a real-world motion blur dataset of dynamic scenes to evaluate our approach. Extensive experiments demonstrate that BARD-GS effectively reconstructs high-quality dynamic scenes under realistic conditions, significantly outperforming existing methods.
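As a rough illustration of the decomposition described above (a minimal sketch, not the paper's actual implementation), motion blur can be modeled as the average of sharp renders taken at virtual sub-exposure instants: camera motion blur varies only the camera pose, while object motion blur varies only the state of the dynamic Gaussians. The render_sharp function and its arguments below are hypothetical placeholders for a 3DGS rasterizer.

    import numpy as np

    def blurred_observation(render_sharp, camera_poses, object_states):
        """Average sharp renders over an exposure to simulate motion blur.

        render_sharp(pose, state) is a hypothetical stand-in for a 3DGS
        rasterizer that renders the scene at one sub-exposure instant.
        """
        frames = [render_sharp(pose, state)
                  for pose, state in zip(camera_poses, object_states)]
        return np.mean(frames, axis=0)

    # Camera motion blur only: freeze the object state, vary the pose.
    # cam_blur = blurred_observation(render_sharp, poses_in_exposure,
    #                                [static_state] * len(poses_in_exposure))

    # Object motion blur only: freeze the pose, vary the dynamic Gaussians.
    # obj_blur = blurred_observation(render_sharp,
    #                                [fixed_pose] * len(states_in_exposure),
    #                                states_in_exposure)

Modeling the two blur sources separately in this way is what lets the method supervise camera deblurring and object deblurring with distinct parameterizations rather than a single entangled blur kernel.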

@article{lu2025_2503.15835,
  title={BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting},
  author={Yiren Lu and Yunlai Zhou and Disheng Liu and Tuo Liang and Yu Yin},
  journal={arXiv preprint arXiv:2503.15835},
  year={2025}
}