ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

AffectEcho: Speaker Independent and Language-Agnostic Emotion and Affect Transfer for Speech Synthesis

16 August 2023
Hrishikesh Viswanath
Aneesh Bhattacharya
Pascal Jutras-Dubé
Prerit Gupta
Mridu Prashanth
Yashvardhan Khaitan
Aniket Bera

Papers citing "AffectEcho: Speaker Independent and Language-Agnostic Emotion and Affect Transfer for Speech Synthesis"

1 paper shown
Nonparallel Emotional Voice Conversion For Unseen Speaker-Emotion Pairs Using Dual Domain Adversarial Network & Virtual Domain Pairing
Nirmesh J. Shah
M. Singh
Naoya Takahashi
N. Onoe
21 Feb 2023