An Effective Training Framework for Light-Weight Automatic Speech Recognition Models

22 May 2025
Abdul Hannan
Alessio Brutti
Shah Nawaz
Mubashir Noman
Abstract

Recent advances in deep learning have encouraged the development of large automatic speech recognition (ASR) models that achieve promising results while disregarding computational and memory constraints. However, deploying such models on low-resource devices is impractical despite their favorable performance. Existing approaches (pruning, distillation, layer skipping, etc.) transform large models into smaller ones at the cost of significant performance degradation, or require prolonged training of the smaller models to reach good performance. To address these issues, we introduce an effective two-step, representation-learning-based approach capable of producing several small-sized models from a single large model while ensuring considerably better performance within a limited number of epochs. Comprehensive experiments on ASR benchmarks demonstrate the efficacy of our approach, achieving a three-fold training speed-up and up to 12.54% word error rate improvement.
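
The abstract does not detail the training procedure, so the following is only a rough sketch of one way a two-step, representation-based setup could look: a large ASR encoder is trained and frozen (step 1), then light-weight encoders are trained for a few epochs with a feature-matching loss against the frozen encoder's hidden states alongside the usual CTC loss (step 2). All module names, sizes, and the loss weighting below are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch only; not the method described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Toy ASR encoder: BiLSTM stack followed by a linear CTC head."""

    def __init__(self, feat_dim=80, hidden=256, layers=4, vocab=32):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=layers,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, vocab)

    def forward(self, x):
        reps, _ = self.rnn(x)            # (B, T, 2*hidden) hidden representations
        return reps, self.head(reps)     # representations + CTC logits


# Step 1 (assumed): the large model has already been trained with CTC; here it
# is simply frozen and used as a source of target representations.
teacher = Encoder(hidden=512, layers=6).eval()
for p in teacher.parameters():
    p.requires_grad_(False)

# Step 2 (assumed): a light-weight student plus a projection that maps its
# narrower hidden states to the teacher's width for feature matching.
student = Encoder(hidden=128, layers=2)
proj = nn.Linear(2 * 128, 2 * 512)
optim = torch.optim.Adam(list(student.parameters()) + list(proj.parameters()), lr=1e-3)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)


def train_step(feats, feat_lens, targets, tgt_lens, alpha=0.5):
    """One update: weighted sum of representation-matching and CTC losses."""
    with torch.no_grad():
        t_reps, _ = teacher(feats)
    s_reps, logits = student(feats)
    rep_loss = F.mse_loss(proj(s_reps), t_reps)
    log_probs = logits.log_softmax(-1).transpose(0, 1)   # (T, B, V) for CTCLoss
    asr_loss = ctc(log_probs, targets, feat_lens, tgt_lens)
    loss = alpha * rep_loss + (1 - alpha) * asr_loss
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()


# Tiny smoke test with random filterbank-like features and integer targets.
B, T, U = 4, 200, 20
feats = torch.randn(B, T, 80)
feat_lens = torch.full((B,), T, dtype=torch.long)
targets = torch.randint(1, 32, (B, U))
tgt_lens = torch.full((B,), U, dtype=torch.long)
print(train_step(feats, feat_lens, targets, tgt_lens))

Because the teacher is trained only once, several students of different sizes could be trained this way from the same frozen model, which is consistent with the abstract's claim of producing multiple small models from a single large one in few epochs.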

@article{hannan2025_2505.16991,
  title={An Effective Training Framework for Light-Weight Automatic Speech Recognition Models},
  author={Abdul Hannan and Alessio Brutti and Shah Nawaz and Mubashir Noman},
  journal={arXiv preprint arXiv:2505.16991},
  year={2025}
}