MPE-TTS: Customized Emotion Zero-Shot Text-To-Speech Using Multi-Modal Prompt

24 May 2025
Zhichao Wu
Yueteng Kang
Songjun Cao
Long Ma
Qiulin Li
Qun Yang
Main: 4 pages · 2 figures · 3 tables · Bibliography: 1 page
Abstract

Most existing Zero-Shot Text-To-Speech (ZS-TTS) systems generate unseen speech from a single prompt, such as a reference speech or a text description, which limits their flexibility. We propose a customized emotion ZS-TTS system based on multi-modal prompts. The system disentangles speech into content, timbre, emotion, and prosody, allowing emotion prompts to be provided as text, image, or speech. To extract emotion information from these different prompts, we propose a multi-modal prompt emotion encoder. Additionally, we introduce a prosody predictor to fit the distribution of prosody and propose an emotion consistency loss to preserve emotion information in the predicted prosody. A diffusion-based acoustic model is employed to generate the target mel-spectrogram. Both objective and subjective experiments demonstrate that our system outperforms existing systems in terms of naturalness and similarity. Samples are available at this https URL.
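The abstract mentions an emotion consistency loss that ties the predicted prosody back to the emotion prompt, but does not give its exact form. A common choice for such a constraint is one minus the cosine similarity between the two emotion embeddings; the sketch below uses that formulation as an illustration only, and the function and argument names are hypothetical, not from the paper.

```python
import math

def _l2_normalize(v, eps=1e-8):
    """Normalize a vector to unit length (eps guards against zero vectors)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / (n + eps) for x in v]

def emotion_consistency_loss(pred_prosody_emb, prompt_emotion_emb):
    """One plausible form of an emotion consistency loss: 1 minus the
    cosine similarity between the emotion embedding recovered from the
    predicted prosody and the embedding extracted from the multi-modal
    emotion prompt. Identical embeddings give a loss near 0; orthogonal
    embeddings give a loss of 1."""
    a = _l2_normalize(pred_prosody_emb)
    b = _l2_normalize(prompt_emotion_emb)
    return 1.0 - sum(x * y for x, y in zip(a, b))

# Matching emotion embeddings incur (almost) no penalty.
e = [0.2, -0.5, 1.0]
print(emotion_consistency_loss(e, e) < 1e-6)
```

Minimizing a term like this during training would push the prosody predictor to produce prosody whose emotion content agrees with whichever prompt modality (text, image, or speech) was supplied.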

View on arXiv
@article{wu2025_2505.18453,
  title={MPE-TTS: Customized Emotion Zero-Shot Text-To-Speech Using Multi-Modal Prompt},
  author={Zhichao Wu and Yueteng Kang and Songjun Cao and Long Ma and Qiulin Li and Qun Yang},
  journal={arXiv preprint arXiv:2505.18453},
  year={2025}
}