Cross-Modal Learning for Music-to-Music-Video Description Generation

14 March 2025
Zhuoyuan Mao
Mengjie Zhao
Qiyu Wu
Zhi Zhong
Wei-Hsiang Liao
Hiromi Wakaki
Yuki Mitsufuji
Abstract

Music-to-music-video generation is a challenging task due to the intrinsic differences between the music and video modalities. The advent of powerful text-to-video diffusion models has opened a promising pathway for music-video (MV) generation: first address the music-to-MV description task, then leverage these models for video generation. In this study, we focus on the MV description generation task and propose a comprehensive pipeline encompassing training data construction and multimodal model fine-tuning. We fine-tune existing pre-trained multimodal models on a newly constructed music-to-MV description dataset built on the Music4All dataset, which integrates both musical and visual information. Our experimental results demonstrate that music representations can be effectively mapped to the textual domain, enabling the generation of meaningful MV descriptions directly from music inputs. We also identify key components of the dataset construction pipeline that critically affect the quality of MV descriptions, and highlight specific musical attributes that warrant greater focus for improved MV description generation.
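
The abstract states that music representations can be mapped into the textual domain but does not spell out the mechanism. A common recipe for this kind of multimodal fine-tuning is to project features from a pre-trained music encoder into the language model's token-embedding space and prepend them as a soft prefix before generating the description. The PyTorch sketch below illustrates only that projection step under those assumptions; the module name, dimensions, and random stand-in features are illustrative, not the authors' implementation.

import torch
import torch.nn as nn

class MusicToTextProjector(nn.Module):
    # Hypothetical adapter: maps frame-level music features into the
    # language model's embedding space so they can act as a soft prompt.
    def __init__(self, music_dim: int, text_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(music_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, music_feats: torch.Tensor) -> torch.Tensor:
        # music_feats: (batch, n_frames, music_dim) -> (batch, n_frames, text_dim)
        return self.proj(music_feats)

# Toy usage: random tensors stand in for a real music encoder's output.
projector = MusicToTextProjector(music_dim=768, text_dim=4096)
fake_music_feats = torch.randn(2, 100, 768)   # 2 clips, 100 feature frames
soft_prompt = projector(fake_music_feats)     # would prefix the text embeddings
print(soft_prompt.shape)                      # torch.Size([2, 100, 4096])

In a full pipeline of this shape, the projected features would be concatenated with the embedded instruction tokens and the language model fine-tuned to emit the MV description, which a text-to-video diffusion model then consumes.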

@article{mao2025_2503.11190,
  title={Cross-Modal Learning for Music-to-Music-Video Description Generation},
  author={Zhuoyuan Mao and Mengjie Zhao and Qiyu Wu and Zhi Zhong and Wei-Hsiang Liao and Hiromi Wakaki and Yuki Mitsufuji},
  journal={arXiv preprint arXiv:2503.11190},
  year={2025}
}