ResearchTrend.AI
AsyncSwitch: Asynchronous Text-Speech Adaptation for Code-Switched ASR

17 June 2025
Tuan Nguyen
Huy-Dat Tran
Main: 5 pages, 1 figure, Bibliography: 2 pages
Abstract

Developing code-switched ASR systems is challenging due to language ambiguity and limited exposure to multilingual, code-switched data, while collecting such speech is costly. Prior work generates synthetic audio from text, but these methods are computationally intensive and hard to scale. We introduce AsyncSwitch, a novel asynchronous adaptation framework that leverages large-scale, text-rich web data to pre-expose ASR models to diverse code-switched domains before fine-tuning on paired speech-text corpora. Our three-stage process: (1) trains decoder self-attention and feedforward layers on code-switched text, (2) aligns decoder and encoder via cross-attention using limited speech-text data, and (3) fully fine-tunes the entire model. Experiments with Whisper on Malay-English code-switching demonstrate a 9.02% relative WER reduction, while improving monolingual performance in Singlish, Malay, and other English variants.
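The three-stage process described above amounts to toggling which parameter groups receive gradient updates at each stage. A minimal sketch of that selection rule, assuming Hugging Face-style Whisper parameter names (e.g. `model.decoder.layers.0.self_attn.q_proj.weight`); the helper and names are illustrative, not the authors' code:

```python
# Sketch of a staged trainability rule over parameter names.
# Assumes Hugging Face-style Whisper naming (an assumption, not from the paper).

def trainable(name: str, stage: int) -> bool:
    """Return True if the named parameter is updated in the given stage."""
    if stage == 3:
        # Stage 3: full fine-tuning of the entire model.
        return True
    in_decoder = ".decoder." in name
    if stage == 1:
        # Stage 1: decoder self-attention and feed-forward layers,
        # adapted on large-scale code-switched text only.
        return in_decoder and any(p in name for p in (".self_attn.", ".fc1.", ".fc2."))
    if stage == 2:
        # Stage 2: encoder-decoder cross-attention, aligned on
        # limited paired speech-text data.
        return in_decoder and ".encoder_attn." in name
    raise ValueError(f"unknown stage: {stage}")

# In practice one would iterate over model.named_parameters() and set
# p.requires_grad = trainable(name, stage) before each stage's training.
names = [
    "model.encoder.layers.0.self_attn.q_proj.weight",
    "model.decoder.layers.0.self_attn.q_proj.weight",
    "model.decoder.layers.0.encoder_attn.q_proj.weight",
    "model.decoder.layers.0.fc1.weight",
]
print([n for n in names if trainable(n, 1)])
```

Selecting parameters by name substring keeps the schedule independent of model size, so the same rule applies to any Whisper variant with this layout.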

@article{nguyen2025_2506.14190,
  title={AsyncSwitch: Asynchronous Text-Speech Adaptation for Code-Switched ASR},
  author={Tuan Nguyen and Huy-Dat Tran},
  journal={arXiv preprint arXiv:2506.14190},
  year={2025}
}