VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling

6 June 2024
Zeyue Tian
Zhaoyang Liu
Ruibin Yuan
Jiahao Pan
Qifeng Liu
Xu Tan
Qifeng Chen
Wei Xue
Yike Guo
Abstract

In this work, we systematically study music generation conditioned solely on video. First, we present a large-scale dataset of 360K video-music pairs spanning various genres, including movie trailers, advertisements, and documentaries. We then propose VidMuse, a simple framework for generating music aligned with video inputs. VidMuse stands out by producing high-fidelity music that is both acoustically and semantically aligned with the video. By incorporating local and global visual cues through Long-Short-Term modeling, VidMuse creates musically coherent audio tracks that consistently match the video content. Extensive experiments show that VidMuse outperforms existing models in audio quality, diversity, and audio-visual alignment. The code and datasets are available at this https URL.
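
The Long-Short-Term modeling described above can be pictured as two branches over per-frame visual features: a short-term branch attending to a local window around the current position, and a long-term branch summarizing the whole clip, with the two fused into a conditioning signal for the music decoder. The PyTorch sketch below is only a minimal illustration of that idea; the class name, layer sizes, window length, and pooling choices are hypothetical and do not reflect the authors' implementation.

# Illustrative sketch (not the authors' code): fuse a local-window view and a
# global view of per-frame features into one conditioning vector per decoding step.
import torch
import torch.nn as nn

class LongShortTermConditioner(nn.Module):
    def __init__(self, dim=512, window=16, heads=8):
        super().__init__()
        self.window = window
        # Short-term branch: self-attention over a local window of frames.
        self.short_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        # Long-term branch: self-attention over a strided subsample of all frames.
        self.long_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, frame_feats, t):
        # frame_feats: (B, T, D) per-frame visual embeddings
        # (e.g. from a CLIP-like frame encoder); t: current position,
        # around which the local window is centered.
        B, T, D = frame_feats.shape
        lo = max(0, t - self.window // 2)
        hi = min(T, lo + self.window)
        local = self.short_attn(frame_feats[:, lo:hi])   # (B, w, D)
        glob = self.long_attn(frame_feats[:, ::4])       # (B, T//4, D)
        # Pool each branch and fuse into one conditioning vector.
        cue = torch.cat([local.mean(dim=1), glob.mean(dim=1)], dim=-1)
        return self.fuse(cue)                             # (B, D)

# Toy usage: condition one decoder step on the fused visual cue.
feats = torch.randn(2, 120, 512)            # 2 clips, 120 frames, 512-d features
cond = LongShortTermConditioner()(feats, t=60)
print(cond.shape)                            # torch.Size([2, 512])

In an autoregressive music-token decoder, such a cue would typically enter through cross-attention at every generation step, which is one plausible way to keep the audio locally synchronized while staying globally coherent.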

@article{tian2025_2406.04321,
  title={VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling},
  author={Zeyue Tian and Zhaoyang Liu and Ruibin Yuan and Jiahao Pan and Qifeng Liu and Xu Tan and Qifeng Chen and Wei Xue and Yike Guo},
  journal={arXiv preprint arXiv:2406.04321},
  year={2025}
}