Bilevel Joint Unsupervised and Supervised Training for Automatic Speech Recognition

IEEE Transactions on Audio, Speech, and Language Processing (TASLP), 2024
Main: 7 pages · Bibliography: 2 pages · Appendix: 1 page
4 figures · 7 tables
Abstract

In this paper, we propose a bilevel joint unsupervised and supervised training (BL-JUST) framework for automatic speech recognition. In contrast to the conventional pre-training and fine-tuning strategy, which is a disconnected two-stage process, BL-JUST optimizes an acoustic model so that it simultaneously minimizes both the unsupervised and supervised loss functions. Because BL-JUST seeks matched local optima of both loss functions, the acoustic representations it learns strike a good balance between being generic and task-specific. We solve the BL-JUST problem using penalty-based bilevel gradient descent and evaluate the trained deep neural network acoustic models on various datasets with a variety of architectures and loss functions. We show that BL-JUST can outperform the widely used pre-training and fine-tuning strategy as well as several other popular semi-supervised techniques.

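The core idea, as the abstract describes it, is to replace the two disconnected stages with a single optimization in which the supervised objective is the upper-level problem and the unsupervised objective enters through a penalty term, so both losses are descended together. The sketch below illustrates the shape of one such penalty-based joint update for a PyTorch acoustic model; the function and argument names (bl_just_step, supervised_loss_fn, unsupervised_loss_fn, penalty) and the choice of losses are illustrative assumptions, not the paper's exact formulation.

import torch

def bl_just_step(model, labeled_batch, unlabeled_batch, optimizer,
                 supervised_loss_fn, unsupervised_loss_fn, penalty):
    # One penalty-based joint update: the supervised (upper-level) loss is
    # minimized together with a penalty on the unsupervised (lower-level) loss.
    optimizer.zero_grad()
    sup_loss = supervised_loss_fn(model, labeled_batch)        # e.g. CTC on transcribed speech
    unsup_loss = unsupervised_loss_fn(model, unlabeled_batch)  # e.g. a self-supervised objective
    total_loss = sup_loss + penalty * unsup_loss
    total_loss.backward()
    optimizer.step()
    return sup_loss.item(), unsup_loss.item()

In practice the penalty weight would be chosen or scheduled so that the unsupervised loss stays near a local optimum while the supervised loss is driven down, which is what distinguishes this joint update from running pre-training and fine-tuning as separate stages.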