GaussianMotion: End-to-End Learning of Animatable Gaussian Avatars with Pose Guidance from Text

17 February 2025
Gyumin Shim
Sangmin Lee
Jaegul Choo
3DGS
Abstract

In this paper, we introduce GaussianMotion, a novel human rendering model that generates fully animatable scenes aligned with textual descriptions using Gaussian Splatting. Although existing methods achieve reasonable text-to-3D generation of human bodies using various 3D representations, they often face limitations in fidelity and efficiency, or primarily focus on static models with limited pose control. In contrast, our method generates fully animatable 3D avatars by combining deformable 3D Gaussian Splatting with text-to-3D score distillation, achieving high fidelity and efficient rendering for arbitrary poses. By densely generating diverse random poses during optimization, our deformable 3D human model learns to capture a wide range of natural motions distilled from a pose-conditioned diffusion model in an end-to-end manner. Furthermore, we propose Adaptive Score Distillation that effectively balances realistic detail and smoothness to achieve optimal 3D results. Experimental results demonstrate that our approach outperforms existing baselines by producing high-quality textures in both static and animated results, and by generating diverse 3D human models from various textual inputs.
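To make the training loop in the abstract concrete, here is a minimal sketch of end-to-end optimization of deformable 3D Gaussians with pose-conditioned score distillation under densely sampled random poses. This is not the authors' implementation: every name below (GaussianAvatar, render, sds_grad, sample_random_pose) is a hypothetical stand-in for the real splatting renderer and diffusion guidance, and shapes and hyperparameters are illustrative assumptions.

# Minimal sketch (NOT the authors' code) of the loop the abstract describes:
# deformable 3D Gaussians trained end-to-end with pose-conditioned score
# distillation over randomly sampled poses. All modules are hypothetical stubs.
import torch
import torch.nn as nn

class GaussianAvatar(nn.Module):
    """Canonical 3D Gaussians plus a small MLP predicting pose-dependent offsets."""
    def __init__(self, num_gaussians=10_000, pose_dim=72):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_gaussians, 3) * 0.1)
        self.colors = nn.Parameter(torch.rand(num_gaussians, 3))
        self.deform = nn.Sequential(
            nn.Linear(3 + pose_dim, 128), nn.ReLU(), nn.Linear(128, 3))

    def deformed_centers(self, pose):
        # Broadcast the pose vector to every Gaussian and predict offsets.
        pose_rep = pose.expand(self.centers.shape[0], -1)
        return self.centers + self.deform(torch.cat([self.centers, pose_rep], -1))

def sample_random_pose(pose_dim=72):
    # Stand-in for dense random (SMPL-style) pose sampling during optimization.
    return torch.randn(pose_dim) * 0.3

def render(avatar, pose):
    # Placeholder for differentiable Gaussian splatting; returns a tensor that
    # still depends on the avatar parameters so gradients flow in this sketch.
    feat = (avatar.deformed_centers(pose) * avatar.colors).mean(dim=0)
    return feat.view(3, 1, 1).expand(3, 64, 64)

def sds_grad(image, pose, prompt):
    # Placeholder for pose-conditioned diffusion guidance: a real SDS step would
    # noise the rendering, query the diffusion model, and return the weighted
    # predicted-noise residual.
    return torch.randn_like(image) * 0.01

avatar = GaussianAvatar()
opt = torch.optim.Adam(avatar.parameters(), lr=1e-3)
for step in range(100):                      # real training runs far longer
    pose = sample_random_pose()
    img = render(avatar, pose)
    grad = sds_grad(img, pose, "a person wearing a red jacket")
    # Standard score-distillation trick: make d(loss)/d(img) equal to grad.
    loss = (img * grad.detach()).sum()
    opt.zero_grad(); loss.backward(); opt.step()

In this reading, the paper's Adaptive Score Distillation would live inside the guidance step (sds_grad here), modulating the distilled gradient to balance realistic detail against smoothness; the stub above does not model that behavior.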

@article{shim2025_2502.11642,
  title={GaussianMotion: End-to-End Learning of Animatable Gaussian Avatars with Pose Guidance from Text},
  author={Gyumin Shim and Sangmin Lee and Jaegul Choo},
  journal={arXiv preprint arXiv:2502.11642},
  year={2025}
}