
CR-CTC: Consistency regularization on CTC for improved speech recognition

Abstract

Connectionist Temporal Classification (CTC) is a widely used method for automatic speech recognition (ASR), renowned for its simplicity and computational efficiency. However, it often falls short in recognition performance. In this work, we propose Consistency-Regularized CTC (CR-CTC), which enforces consistency between two CTC distributions obtained from different augmented views of the input speech mel-spectrogram. We provide in-depth insights into its essential behaviors from three perspectives: 1) it conducts self-distillation between random pairs of sub-models that process different augmented views; 2) it learns contextual representations through masked prediction for positions within time-masked regions, especially when we increase the amount of time masking; 3) it suppresses extremely peaky CTC distributions, thereby reducing overfitting and improving generalization. Extensive experiments on the LibriSpeech, Aishell-1, and GigaSpeech datasets demonstrate the effectiveness of CR-CTC. It significantly improves CTC performance, achieving state-of-the-art results comparable to those attained by transducer models or systems combining CTC with an attention-based encoder-decoder (CTC/AED). We release our code at this https URL.
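As a minimal sketch of the consistency idea described above: the regularizer penalizes disagreement between the per-frame CTC posterior distributions produced from two augmented views of the same utterance, e.g. via a symmetric KL divergence added to the two CTC losses. The function names, the symmetric-KL form, and the equal 0.5 weighting below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def log_softmax(x, axis=-1):
    """Numerically stable log-softmax over the vocabulary axis."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def symmetric_kl(logp_a, logp_b):
    """Frame-averaged symmetric KL divergence between two per-frame
    CTC posteriors, each given as [T, V] log-probabilities.

    This is an illustrative stand-in for the consistency term; the
    paper may weight or formulate it differently.
    """
    pa, pb = np.exp(logp_a), np.exp(logp_b)
    kl_ab = (pa * (logp_a - logp_b)).sum(axis=-1)  # KL(a || b) per frame
    kl_ba = (pb * (logp_b - logp_a)).sum(axis=-1)  # KL(b || a) per frame
    return 0.5 * (kl_ab + kl_ba).mean()

# Hypothetical usage: logits_a / logits_b come from the same encoder
# applied to two differently augmented views of one utterance.
rng = np.random.default_rng(0)
logits_a = rng.normal(size=(50, 32))   # [frames, vocab]
logits_b = logits_a + 0.1 * rng.normal(size=(50, 32))
consistency_loss = symmetric_kl(log_softmax(logits_a), log_softmax(logits_b))
```

In training, this consistency term would be added (with some weight) to the sum of the two views' ordinary CTC losses, which are omitted here for brevity.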

@article{yao2025_2410.05101,
  title={CR-CTC: Consistency regularization on CTC for improved speech recognition},
  author={Zengwei Yao and Wei Kang and Xiaoyu Yang and Fangjun Kuang and Liyong Guo and Han Zhu and Zengrui Jin and Zhaoqing Li and Long Lin and Daniel Povey},
  journal={arXiv preprint arXiv:2410.05101},
  year={2025}
}