AMPS: ASR with Multimodal Paraphrase Supervision

27 November 2024
Amruta Parulekar
Abhishek Gupta
Sameep Chattopadhyay
P. Jyothi
Abstract

Spontaneous or conversational multilingual speech presents many challenges for state-of-the-art automatic speech recognition (ASR) systems. In this work, we present a new technique, AMPS, that augments a multilingual multimodal ASR system with paraphrase-based supervision for improved conversational ASR in multiple languages, including Hindi, Marathi, Malayalam, Kannada, and Nyanja. We use paraphrases of the reference transcriptions as additional supervision while training the multimodal ASR model and selectively invoke this paraphrase objective for utterances with poor ASR performance. Using AMPS with a state-of-the-art multimodal model, SeamlessM4T, we obtain significant relative reductions in word error rates (WERs) of up to 5%. We present detailed analyses of our system using both objective and human evaluation metrics.
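The selective supervision described above can be illustrated with a minimal sketch: an auxiliary paraphrase loss is added to the ASR loss only for utterances whose ASR performance is poor. The function names, the WER-based gating criterion, and the weighting factor below are assumptions for illustration; the paper's actual formulation may differ.

```python
# Hypothetical sketch of AMPS-style selective paraphrase supervision.
# The WER threshold, the weight alpha, and the gating rule are
# illustrative assumptions, not the paper's exact formulation.

def amps_loss(asr_loss: float, paraphrase_loss: float, wer: float,
              wer_threshold: float = 0.3, alpha: float = 0.5) -> float:
    """Combine the ASR loss and the paraphrase loss for one utterance.

    The paraphrase objective is invoked only when the utterance's
    ASR performance is poor (here: WER above a threshold).
    """
    if wer > wer_threshold:
        return asr_loss + alpha * paraphrase_loss
    return asr_loss


def batch_amps_loss(batch, wer_threshold: float = 0.3,
                    alpha: float = 0.5) -> float:
    """Average the per-utterance losses over a batch of
    (asr_loss, paraphrase_loss, wer) triples."""
    losses = [amps_loss(a, p, w, wer_threshold, alpha)
              for a, p, w in batch]
    return sum(losses) / len(losses)
```

For a well-recognized utterance (low WER) the paraphrase term is skipped and only the ASR loss is applied; for a poorly recognized one, the weighted paraphrase loss is added on top.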

@article{gupta2025_2411.18368,
  title={AMPS: ASR with Multimodal Paraphrase Supervision},
  author={Abhishek Gupta and Amruta Parulekar and Sameep Chattopadhyay and Preethi Jyothi},
  journal={arXiv preprint arXiv:2411.18368},
  year={2025}
}