Extending Visual Dynamics for Video-to-Music Generation

10 April 2025
Xiaohao Liu, Teng Tu, Yunshan Ma, Tat-Seng Chua
Abstract

Music profoundly enhances video production by improving quality, engagement, and emotional resonance, sparking growing interest in video-to-music generation. Despite recent advances, existing approaches remain limited to specific scenarios or undervalue visual dynamics. To address these limitations, we focus on tackling the complexity of dynamics and resolving temporal misalignment between video and music representations. To this end, we propose DyViM, a novel framework that enhances dynamics modeling for video-to-music generation. Specifically, we extract frame-wise dynamics features via a simplified motion encoder inherited from optical-flow methods, followed by a self-attention module for aggregation within frames. These dynamics features are then incorporated to extend existing music tokens for temporal alignment. Additionally, high-level semantics are conveyed through a cross-attention mechanism, and an annealing tuning strategy enables efficient fine-tuning of well-trained music decoders, facilitating seamless adaptation. Extensive experiments demonstrate DyViM's superiority over state-of-the-art (SOTA) methods.
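The abstract describes the pipeline only at a high level. As an illustration of how the named components could fit together, here is a minimal PyTorch-style sketch: a per-frame dynamics encoder with within-frame self-attention, and a decoder block that fuses frame-aligned dynamics into music tokens and injects semantics via cross-attention. Every module name, tensor shape, and fusion choice below is an assumption for illustration, not the paper's actual implementation; likewise, the annealing tuning strategy for the pretrained music decoder is not sketched here.

```python
import torch
import torch.nn as nn

class DynamicsEncoder(nn.Module):
    """Frame-wise motion features aggregated by within-frame self-attention.

    Hypothetical stand-in for the simplified motion encoder inherited from
    optical-flow methods; assumes it emits per-frame patches of motion features.
    """
    def __init__(self, motion_dim=128, d_model=512, n_heads=8):
        super().__init__()
        self.proj = nn.Linear(motion_dim, d_model)
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, motion_feats):             # (B, T, P, motion_dim)
        B, T, P, _ = motion_feats.shape
        x = self.proj(motion_feats).view(B * T, P, -1)
        x, _ = self.self_attn(x, x, x)           # aggregate within each frame
        return x.mean(dim=1).view(B, T, -1)      # one dynamics token per frame

class DyViMDecoderBlock(nn.Module):
    """Music tokens extended with dynamics, plus semantic cross-attention."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.token_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, music_tokens, dynamics, semantics):
        # Extend music tokens with frame-aligned dynamics for temporal
        # alignment; additive fusion is one plausible choice, assumed here.
        h = music_tokens + dynamics              # assumes matching (B, T, d_model)
        h, _ = self.token_attn(h, h, h)
        # Inject high-level visual semantics via cross-attention.
        h, _ = self.cross_attn(h, semantics, semantics)
        return h

# Toy usage: 2 videos, 16 frames, 49 motion patches per frame.
enc = DynamicsEncoder()
dyn = enc(torch.randn(2, 16, 49, 128))                       # (2, 16, 512)
block = DyViMDecoderBlock()
out = block(torch.randn(2, 16, 512), dyn, torch.randn(2, 10, 512))
```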

@article{liu2025_2504.07594,
  title={Extending Visual Dynamics for Video-to-Music Generation},
  author={Xiaohao Liu and Teng Tu and Yunshan Ma and Tat-Seng Chua},
  journal={arXiv preprint arXiv:2504.07594},
  year={2025}
}