Advancing Arabic Speech Recognition Through Large-Scale Weakly Supervised Learning

Automatic speech recognition (ASR) is crucial for human-machine interaction in diverse applications such as conversational agents, industrial robotics, call-center automation, and automated subtitling. However, developing high-performance ASR models remains challenging, particularly for low-resource languages like Arabic, because large labeled speech datasets are scarce and costly and labor-intensive to produce. In this work, we employ weakly supervised learning to train an Arabic ASR model using the Conformer architecture. Our model is trained from scratch on 15,000 hours of weakly annotated speech data covering both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), eliminating the need for costly manual transcriptions. Despite the absence of human-verified labels, our approach achieves state-of-the-art (SOTA) results in Arabic ASR, surpassing both open- and closed-source models on standard benchmarks. These results demonstrate the effectiveness of weak supervision as a scalable, cost-efficient alternative to traditional supervised approaches, paving the way for improved ASR systems in low-resource settings.
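The standard benchmarks mentioned above are typically scored with word error rate (WER): the word-level edit distance between the reference transcript and the model's hypothesis, normalized by the reference length. As a minimal, self-contained sketch (not the authors' evaluation code), WER can be computed with a dynamic-programming edit distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: edit distance over word sequences,
    normalized by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # One row of the (len(ref)+1) x (len(hyp)+1) DP grid at a time.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev = d[:]          # previous row
        d[0] = i             # cost of deleting the first i ref words
        for j in range(1, len(hyp) + 1):
            sub = prev[j - 1] + (ref[i - 1] != hyp[j - 1])  # match/substitute
            d[j] = min(sub,
                       prev[j] + 1,     # deletion
                       d[j - 1] + 1)    # insertion
    return d[len(hyp)] / max(len(ref), 1)

# One substitution out of three reference words -> WER of 1/3.
print(wer("a b c", "a x c"))
```

In practice, Arabic ASR evaluation usually applies text normalization (e.g. diacritic removal) before scoring, since orthographic variation would otherwise inflate the error rate.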
@article{salhab2025_2504.12254,
  title   = {Advancing Arabic Speech Recognition Through Large-Scale Weakly Supervised Learning},
  author  = {Mahmoud Salhab and Marwan Elghitany and Shameed Sait and Syed Sibghat Ullah and Mohammad Abusheikh and Hasan Abusheikh},
  journal = {arXiv preprint arXiv:2504.12254},
  year    = {2025}
}