Articulated Kinematics Distillation from Video Diffusion Models

1 April 2025
Xuan Li, Qianli Ma, Tsung-Yi Lin, Yongxin Chen, Chenfanfu Jiang, Ming-Yu Liu, Donglai Xiang
Abstract

We present Articulated Kinematics Distillation (AKD), a framework for generating high-fidelity character animations by merging the strengths of skeleton-based animation and modern generative models. AKD uses a skeleton-based representation for rigged 3D assets, drastically reducing the Degrees of Freedom (DoFs) by focusing on joint-level control, which allows for efficient, consistent motion synthesis. Through Score Distillation Sampling (SDS) with pre-trained video diffusion models, AKD distills complex, articulated motions while maintaining structural integrity, overcoming challenges faced by 4D neural deformation fields in preserving shape consistency. This approach is naturally compatible with physics-based simulation, ensuring physically plausible interactions. Experiments show that AKD achieves superior 3D consistency and motion quality compared with existing works on text-to-4D generation. Project page: this https URL
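
The abstract describes distilling motion from a video diffusion model via SDS while optimizing only a low-DoF, joint-level parameterization. The sketch below illustrates one generic SDS update over joint angles, assuming a hypothetical differentiable renderer (render_video) and a diffusion-model wrapper exposing predict_noise and alphas_cumprod; these names, signatures, and the weighting are illustrative assumptions, not the authors' implementation.

import torch

def sds_step(joint_angles, render_video, diffusion, text_embedding,
             optimizer, num_frames=16, guidance_scale=100.0):
    """One generic SDS update: render a clip driven by `joint_angles`,
    noise it, and use the diffusion model's denoising residual as the
    gradient signal on the joint-level parameters (a sketch, not the
    paper's exact procedure)."""
    optimizer.zero_grad()

    # Differentiably render a short clip from the current joint trajectory.
    video = render_video(joint_angles, num_frames=num_frames)  # (T, C, H, W)

    # Sample a diffusion timestep and perturb the rendered video.
    t = torch.randint(20, 980, (1,), device=video.device)
    noise = torch.randn_like(video)
    alpha_bar = diffusion.alphas_cumprod[t]          # assumed attribute
    noisy = alpha_bar.sqrt() * video + (1 - alpha_bar).sqrt() * noise

    # The residual (eps_hat - eps) gives the SDS gradient direction,
    # back-propagated only to the low-DoF joint parameters.
    with torch.no_grad():
        eps_hat = diffusion.predict_noise(noisy, t, text_embedding,
                                          guidance_scale=guidance_scale)
    w = 1.0 - alpha_bar                              # a common weighting choice
    grad = w * (eps_hat - noise)
    video.backward(gradient=grad)

    optimizer.step()

Because the optimizer acts on joint angles rather than a dense 4D deformation field, each update moves far fewer parameters, which is the efficiency and consistency argument made in the abstract.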

@article{li2025_2504.01204,
  title={Articulated Kinematics Distillation from Video Diffusion Models},
  author={Xuan Li and Qianli Ma and Tsung-Yi Lin and Yongxin Chen and Chenfanfu Jiang and Ming-Yu Liu and Donglai Xiang},
  journal={arXiv preprint arXiv:2504.01204},
  year={2025}
}