CS-Dialogue: A 104-Hour Dataset of Spontaneous Mandarin-English Code-Switching Dialogues for Speech Recognition

26 February 2025
Jiaming Zhou
Yujie Guo
Shiwan Zhao
Haoqin Sun
Hui Wang
Jiabei He
Aobo Kong
Shiyao Wang
Xi Yang
Yequan Wang
Yonghua Lin
Yong Qin
Abstract

Code-switching (CS), the alternation between two or more languages within a single conversation, presents significant challenges for automatic speech recognition (ASR) systems. Existing Mandarin-English code-switching datasets often suffer from limited size, a lack of spontaneity, and the absence of full-length dialogue recordings with transcriptions, hindering the development of robust ASR models for real-world conversational scenarios. This paper introduces CS-Dialogue, a novel large-scale Mandarin-English code-switching speech dataset comprising 104 hours of spontaneous conversations from 200 speakers. Unlike previous datasets, CS-Dialogue provides full-length dialogue recordings with complete transcriptions, capturing naturalistic code-switching patterns in continuous speech. We describe the data collection and annotation processes, present detailed statistics of the dataset, and establish benchmark ASR performance using state-of-the-art models. Our experiments with Transformer, Conformer, and Branchformer models demonstrate the challenges of code-switching ASR and show that existing pre-trained models such as Whisper still have room for improvement. The CS-Dialogue dataset will be made freely available for all academic purposes.
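The abstract reports benchmarking pre-trained models such as Whisper on the dataset. As a rough illustration of what such an evaluation involves, the sketch below transcribes a single dialogue clip with a pretrained Whisper checkpoint and scores it against a reference transcript; the audio file name, the reference text, and the choice of character error rate as the metric are assumptions for illustration, not details taken from the paper.

# Minimal sketch: score a pretrained Whisper checkpoint on one
# code-switched utterance. The audio path and reference transcript are
# hypothetical; CS-Dialogue's file layout is not specified in the abstract.
import jiwer
from transformers import pipeline

# Load a multilingual Whisper checkpoint via the HuggingFace ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# Transcribe one Mandarin-English dialogue clip (hypothetical file name).
hypothesis = asr("cs_dialogue_sample.wav")["text"]
reference = "我们明天有一个 deadline 要赶"  # hypothetical ground-truth transcript

# Character error rate is a common choice for Mandarin-heavy CS speech,
# since Mandarin text is unsegmented; jiwer computes it per character.
print(f"CER: {jiwer.cer(reference, hypothesis):.3f}")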

@article{zhou2025_2502.18913,
  title={CS-Dialogue: A 104-Hour Dataset of Spontaneous Mandarin-English Code-Switching Dialogues for Speech Recognition},
  author={Jiaming Zhou and Yujie Guo and Shiwan Zhao and Haoqin Sun and Hui Wang and Jiabei He and Aobo Kong and Shiyao Wang and Xi Yang and Yequan Wang and Yonghua Lin and Yong Qin},
  journal={arXiv preprint arXiv:2502.18913},
  year={2025}
}