Deblur-Avatar: Animatable Avatars from Motion-Blurred Monocular Videos

23 January 2025
Xianrui Luo, Juewen Peng, Zhongang Cai, Lei Yang, Fan Yang, Zhiguo Cao, Guosheng Lin
Abstract

We introduce a novel framework for modeling high-fidelity, animatable 3D human avatars from motion-blurred monocular video inputs. Motion blur is prevalent in real-world dynamic video capture, and in 3D human avatar modeling it arises especially from human movement. Existing methods either (1) assume sharp image inputs, failing to address the detail loss introduced by motion blur, or (2) mainly consider blur caused by camera movement, neglecting human motion blur, which is more common in animatable avatars. Our approach integrates a human-movement-based motion blur model into 3D Gaussian Splatting (3DGS). By explicitly modeling human motion trajectories during the exposure time, we jointly optimize the trajectories and the 3D Gaussians to reconstruct sharp, high-quality human avatars. We employ a pose-dependent fusion mechanism to distinguish moving body regions, optimizing both blurred and sharp areas effectively. Extensive experiments on synthetic and real-world datasets demonstrate that our method significantly outperforms existing methods in rendering quality and quantitative metrics, producing sharp avatar reconstructions and enabling real-time rendering under challenging motion blur conditions.
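For intuition, here is a minimal PyTorch sketch of the kind of exposure-time blur model the abstract describes: a blurry frame is synthesized as the average of sharp renders at poses interpolated along a latent trajectory over the exposure window, so a photometric loss against the blurry input sends gradients to both the trajectory and the scene parameters. Everything here (blur_render, toy_render, the linear pose interpolation, all tensor shapes) is an illustrative assumption, not the authors' implementation; in particular, the real method rasterizes posed 3D Gaussians and adds a pose-dependent fusion step that this sketch omits.

# Sketch of exposure-time blur synthesis for training, assuming the common
# formulation: a blurry frame = average of sharp renders along a latent
# pose trajectory. `render` is a stand-in for a differentiable 3DGS
# rasterizer; the pose-dependent fusion mechanism is omitted.
import torch

def blur_render(render, gaussians, pose_start, pose_end, n_samples=8):
    """Synthesize a motion-blurred frame by averaging sharp renders at
    poses linearly interpolated across the exposure window."""
    frames = []
    for t in torch.linspace(0.0, 1.0, n_samples):
        pose_t = (1.0 - t) * pose_start + t * pose_end  # latent trajectory
        frames.append(render(gaussians, pose_t))
    return torch.stack(frames).mean(dim=0)  # averaging models the blur

# --- toy stand-ins so the sketch runs end to end (all hypothetical) ---
def toy_render(gaussians, pose):
    # Placeholder "renderer": any differentiable map from (scene, pose)
    # to an image; real code would rasterize the posed 3D Gaussians here.
    return torch.sigmoid(gaussians + pose.sum())

gaussians = torch.randn(3, 64, 64, requires_grad=True)  # scene parameters
pose_start = torch.zeros(24, requires_grad=True)        # exposure-start pose
pose_end = torch.full((24,), 0.1, requires_grad=True)   # exposure-end pose
target_blurry = torch.rand(3, 64, 64)                   # observed blurry frame

opt = torch.optim.Adam([gaussians, pose_start, pose_end], lr=1e-2)
for step in range(100):
    opt.zero_grad()
    pred = blur_render(toy_render, gaussians, pose_start, pose_end)
    loss = torch.nn.functional.l1_loss(pred, target_blurry)
    loss.backward()  # gradients reach trajectory and scene jointly
    opt.step()

The appeal of this formulation is that supervision stays self-contained: the model is only ever compared against the blurry input, yet the per-sample renders it learns along the trajectory are sharp.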

@article{luo2025_2501.13335,
  title={Deblur-Avatar: Animatable Avatars from Motion-Blurred Monocular Videos},
  author={Xianrui Luo and Juewen Peng and Zhongang Cai and Lei Yang and Fan Yang and Zhiguo Cao and Guosheng Lin},
  journal={arXiv preprint arXiv:2501.13335},
  year={2025}
}