LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation

Abstract

Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to their high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduces inference costs while maintaining transcription accuracy. Our approach leverages the strong low-rank properties observed in intermediate activations: by applying principal component analysis (PCA) with a small calibration dataset, we approximate linear transformations with a chain of low-rank matrix multiplications, and further optimize self-attention to work in the reduced dimension. Evaluation results show that our method can compress Whisper large-v3's encoder size by over 50%, matching Whisper medium's size with better transcription accuracy, thereby establishing a new Pareto-optimal frontier of efficiency and performance. The code of LiteASR is available at this https URL.
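The core idea of approximating a linear layer via activation PCA can be illustrated with a short sketch. This is a minimal illustration of the general technique, not the paper's implementation: all names, shapes, and the synthetic rank-k calibration data are assumptions. A dense weight `W` is replaced by two thin factors `A = Vk` and `B = W @ Vk`, where `Vk` holds the top-k principal directions of the calibration activations.

```python
import numpy as np

# Hypothetical sketch of PCA-based low-rank factorization of a linear layer;
# names, shapes, and data are illustrative, not from the LiteASR codebase.
rng = np.random.default_rng(0)
d_in, d_out, n_calib, k = 64, 48, 256, 8

# Synthetic calibration activations with exact rank-k structure.
X = rng.normal(size=(n_calib, k)) @ rng.normal(size=(k, d_in))
W = rng.normal(size=(d_out, d_in))   # original dense layer weight

# Top-k principal directions of the activations (right singular vectors).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Vk = Vt[:k].T                        # (d_in, k) projection basis

# Replace y = x @ W.T with a chain of two thin matmuls:
#   y ≈ (x @ A) @ B.T, with A = Vk and B = W @ Vk.
A, B = Vk, W @ Vk

y_full = X @ W.T                     # original layer output
y_low = (X @ A) @ B.T                # low-rank approximation
rel_err = np.linalg.norm(y_low - y_full) / np.linalg.norm(y_full)
```

When the activations are well-approximated by their top-k subspace, as the paper observes for ASR encoders, the factored layer costs roughly `k / d_in` of the original matmul FLOPs while introducing little error.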

@article{kamahori2025_2502.20583,
  title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
  author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
  journal={arXiv preprint arXiv:2502.20583},
  year={2025}
}