Generation of Musical Timbres using a Text-Guided Diffusion Model

12 April 2025
Weixuan Yuan
Qadeer Khan
Vladimir Golkov
Abstract

In recent years, text-to-audio systems have achieved remarkable success, enabling the generation of complete audio segments directly from text descriptions. While these systems also facilitate music creation, the element of human creativity and deliberate expression is often limited. In contrast, the present work allows composers, arrangers, and performers to create the basic building blocks for music creation: audio of individual musical notes for use in electronic instruments and DAWs. Through text prompts, the user can specify the timbre characteristics of the audio. We introduce a system that combines a latent diffusion model and multi-modal contrastive learning to generate musical timbres conditioned on text descriptions. By jointly generating the magnitude and phase of the spectrogram, our method eliminates the need for subsequently running a phase retrieval algorithm, as related methods do. Audio examples, source code, and a web app are available at this https URL.
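The multi-modal contrastive learning mentioned in the abstract refers to the standard way text and audio are brought into a shared embedding space (as in CLIP/CLAP-style models), so that a text prompt can condition generation. The sketch below is a generic illustration of that idea, not the authors' implementation; the batch pairing, embedding dimensions, and temperature value are assumptions.

```python
# Generic sketch of multi-modal contrastive alignment (symmetric InfoNCE),
# in the spirit of CLAP-style text-audio models. Not the paper's code:
# the pairing convention, dimensions, and temperature are assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb: torch.Tensor,
                     audio_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """text_emb, audio_emb: (batch, dim) encoder outputs, where row i of
    each tensor comes from the same (text, audio) training pair."""
    text_emb = F.normalize(text_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)
    # Pairwise cosine similarities, scaled by the temperature.
    logits = text_emb @ audio_emb.T / temperature
    targets = torch.arange(logits.shape[0], device=logits.device)
    # Each text should retrieve its own audio clip, and vice versa.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2
```

A text embedding trained this way can then serve as the conditioning signal for the latent diffusion model, for example via cross-attention.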

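The practical payoff of jointly generating magnitude and phase is that the complex spectrogram is fully determined, so the waveform follows from a single inverse STFT; magnitude-only methods must instead estimate the missing phase iteratively. A minimal sketch contrasting the two (not the authors' code; the hop length and iteration count are illustrative values):

```python
# Minimal sketch: waveform recovery with vs. without generated phase.
# Not the authors' code; HOP and n_iter are illustrative assumptions.
import numpy as np
import librosa

HOP = 256  # STFT hop length in samples

def waveform_from_mag_and_phase(mag: np.ndarray, phase: np.ndarray) -> np.ndarray:
    """Magnitude and phase jointly generated: the complex spectrogram is
    fully specified, so one inverse STFT suffices (no phase retrieval)."""
    complex_spec = mag * np.exp(1j * phase)
    return librosa.istft(complex_spec, hop_length=HOP)

def waveform_from_mag_only(mag: np.ndarray) -> np.ndarray:
    """Magnitude only: the missing phase must be estimated iteratively
    (Griffin-Lim), which adds inference cost and can introduce artifacts."""
    return librosa.griffinlim(mag, n_iter=32, hop_length=HOP)
```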
@article{yuan2025_2504.09219,
  title={Generation of Musical Timbres using a Text-Guided Diffusion Model},
  author={Weixuan Yuan and Qadeer Khan and Vladimir Golkov},
  journal={arXiv preprint arXiv:2504.09219},
  year={2025}
}