DGFM: Full Body Dance Generation Driven by Music Foundation Models

27 February 2025
Xinran Liu
Zhenhua Feng
Diptesh Kanojia
Wenwu Wang
Abstract

In music-driven dance motion generation, most existing methods rely on hand-crafted features and overlook the impact that music foundation models have had on cross-modal content generation. To bridge this gap, we propose a diffusion-based method that generates dance movements conditioned on text and music. Our approach extracts music features by combining high-level features obtained from a music foundation model with hand-crafted features, thereby enhancing the quality of the generated dance sequences. This combination leverages both high-level semantic information and low-level temporal details, improving the model's ability to understand music features. To demonstrate the merits of the proposed method, we compare it with four music foundation models and two sets of hand-crafted music features. The results show that our method produces the most realistic dance sequences and achieves the best match with the input music.
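The central idea described in the abstract, concatenating high-level foundation-model embeddings with low-level hand-crafted music features to form the conditioning signal, can be illustrated with a short sketch. This is not the authors' implementation: the specific hand-crafted features (MFCCs and chroma via librosa), the stubbed foundation-model encoder, and the frame-alignment step are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's code): fuse hand-crafted music features with
# embeddings from a music foundation model to condition a motion generator.
import numpy as np
import librosa


def handcrafted_features(y: np.ndarray, sr: int, hop_length: int = 512) -> np.ndarray:
    """Low-level features: MFCCs + chroma, returned as (frames, dims)."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, hop_length=hop_length)
    chroma = librosa.feature.chroma_cens(y=y, sr=sr, hop_length=hop_length)
    n = min(mfcc.shape[1], chroma.shape[1])          # guard against off-by-one frame counts
    return np.concatenate([mfcc[:, :n], chroma[:, :n]], axis=0).T


def foundation_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Hypothetical stand-in for a pretrained music foundation model encoder.
    The paper compares several such models; here we just return dummy
    high-level embeddings of shape (frames, dims)."""
    n_frames = max(1, len(y) // (sr // 2))           # assume ~2 embeddings per second
    return np.random.randn(n_frames, 768)


def fuse_features(low: np.ndarray, high: np.ndarray) -> np.ndarray:
    """Resample the high-level stream to the low-level frame rate, then concatenate."""
    idx = np.linspace(0, len(high) - 1, num=len(low))
    high_aligned = np.stack([high[int(round(i))] for i in idx])
    return np.concatenate([low, high_aligned], axis=1)  # (frames, 32 + 768)


if __name__ == "__main__":
    y, sr = librosa.load(librosa.example("trumpet"))     # any audio clip works here
    cond = fuse_features(handcrafted_features(y, sr), foundation_features(y, sr))
    print(cond.shape)  # per-frame music conditioning passed to the diffusion model
```

In this sketch the two streams are simply concatenated per frame; how the fused features are actually injected into the diffusion model (e.g. via cross-attention or another conditioning mechanism) is not specified by the abstract.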

View on arXiv
@article{liu2025_2502.20176,
  title={DGFM: Full Body Dance Generation Driven by Music Foundation Models},
  author={Xinran Liu and Zhenhua Feng and Diptesh Kanojia and Wenwu Wang},
  journal={arXiv preprint arXiv:2502.20176},
  year={2025}
}