AniGaussian: Animatable Gaussian Avatar with Pose-guided Deformation

24 February 2025
Mengtian Li
Shengxiang Yao
Chen Kai
Zhifeng Xie
Keyu Chen
Yu-Gang Jiang
Abstract

Recent advancements in Gaussian-based human body reconstruction have achieved notable success in creating animatable avatars. However, challenges remain in fully exploiting the SMPL model's prior knowledge and in enhancing visual fidelity to achieve more refined avatar reconstructions. In this paper, we introduce AniGaussian, which addresses these issues with two insights. First, we propose an innovative pose-guided deformation strategy that effectively constrains the dynamic Gaussian avatar with SMPL pose guidance, ensuring that the reconstructed model not only captures detailed surface nuances but also maintains anatomical correctness across a wide range of motions. Second, we tackle the expressiveness limitations of Gaussian models in representing dynamic human bodies by incorporating rigid-based priors from previous works to enhance the dynamic transform capabilities of the Gaussian model. Furthermore, we introduce a split-with-scale strategy that significantly improves geometry quality. Ablation studies demonstrate the effectiveness of our model design, and extensive comparisons with existing methods show that AniGaussian achieves superior performance in both qualitative results and quantitative metrics.
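
The pose-guided deformation described in the abstract builds on the rigid skinning prior of the SMPL body model: each canonical Gaussian is carried to the posed space by a blend of bone transforms. Below is a minimal NumPy sketch of that underlying idea; the function name, tensor shapes, and the plain linear-blend-skinning formulation are illustrative assumptions, not the paper's actual implementation, which additionally learns pose-dependent refinements.

import numpy as np

def deform_gaussians(mu_c, rot_c, skin_weights, bone_transforms):
    # Sketch of LBS-style pose-guided deformation of Gaussian centers
    # and orientations (shapes and names are assumptions):
    #   mu_c:            (N, 3)    canonical Gaussian centers
    #   rot_c:           (N, 3, 3) canonical Gaussian rotations
    #   skin_weights:    (N, J)    per-Gaussian SMPL skinning weights
    #   bone_transforms: (J, 4, 4) SMPL bone transforms for the target pose
    N = mu_c.shape[0]

    # Blend the per-bone transforms with the skinning weights (standard LBS).
    T = np.einsum("nj,jab->nab", skin_weights, bone_transforms)  # (N, 4, 4)

    # Deform centers with the blended affine transform.
    mu_h = np.concatenate([mu_c, np.ones((N, 1))], axis=1)       # homogeneous
    mu_p = np.einsum("nab,nb->na", T, mu_h)[:, :3]

    # Rotate each Gaussian's orientation by the blended linear part
    # so its covariance follows the body motion.
    rot_p = np.einsum("nab,nbc->nac", T[:, :3, :3], rot_c)
    return mu_p, rot_p

Because the blended transform is rigid per Gaussian, this prior keeps the avatar anatomically plausible under large pose changes; the paper's contribution is in how the deformation is further constrained and refined beyond this baseline.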

@article{li2025_2502.19441,
  title={AniGaussian: Animatable Gaussian Avatar with Pose-guided Deformation},
  author={Mengtian Li and Shengxiang Yao and Chen Kai and Zhifeng Xie and Keyu Chen and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2502.19441},
  year={2025}
}