
NEST: Self-supervised Fast Conformer as All-purpose Seasoning to Speech Processing Tasks

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
Taejin Park
Kunal Dhawan
Jagadeesh Balam
Boris Ginsburg
Main: 4 pages
2 figures
Bibliography: 2 pages
Abstract

Self-supervised learning has been shown to benefit a wide range of speech processing tasks, such as speech recognition/translation, speaker verification, and diarization. However, most current approaches are computationally expensive. In this paper, we propose a simplified and more efficient self-supervised learning framework termed NeMo Encoder for Speech Tasks (NEST). Specifically, we adopt the FastConformer architecture with an 8x subsampling rate, which is faster than the Transformer or Conformer architectures. Instead of clustering-based quantization, we use fixed random projection for its simplicity and effectiveness. We also implement a generalized noisy speech augmentation that teaches the model to disentangle the main speaker from noise or other speakers. Experiments show that NEST improves over existing self-supervised models and achieves new state-of-the-art performance on a variety of speech processing tasks, such as speech recognition/translation, speaker diarization, and spoken language understanding. Code and checkpoints will be publicly available via the NVIDIA NeMo framework.
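The fixed random-projection quantizer mentioned in the abstract can be sketched in a few lines. The following is a minimal PyTorch illustration in the style of BEST-RQ, not the exact NEST implementation; the class name, feature dimensions, and codebook size are assumptions for illustration. A frozen random matrix projects each input frame, and the index of the nearest entry in a frozen random codebook becomes that frame's discrete target for masked prediction.

```python
import torch
import torch.nn.functional as F


class RandomProjectionQuantizer(torch.nn.Module):
    # Hypothetical sketch of a fixed random-projection quantizer
    # (BEST-RQ style); names and dimensions are illustrative, not
    # the exact NEST configuration.
    def __init__(self, feat_dim: int = 80, code_dim: int = 16,
                 codebook_size: int = 8192, seed: int = 0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Frozen random projection matrix and codebook, stored as
        # buffers so they are never updated by the optimizer.
        self.register_buffer(
            "projection",
            torch.empty(feat_dim, code_dim).normal_(generator=g))
        codebook = torch.empty(codebook_size, code_dim).normal_(generator=g)
        self.register_buffer("codebook", F.normalize(codebook, dim=-1))

    @torch.no_grad()
    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) input features, e.g. log-mel.
        proj = F.normalize(feats @ self.projection, dim=-1)
        # Cosine similarity against every codebook vector; the argmax
        # index is the discrete target label for each frame.
        return torch.argmax(proj @ self.codebook.T, dim=-1)
```

Because both the projection and the codebook are fixed buffers, no codebook clustering or training is needed; per-frame targets can be computed once per utterance, e.g. `labels = quantizer(log_mel)`.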
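The generalized noisy speech augmentation can likewise be illustrated with a short sketch: an interfering waveform (noise or another speaker) is mixed into the input at a random SNR, while the self-supervised targets are still computed from the clean speech, so the model must disentangle the main speaker. The function name and SNR range below are hypothetical, not the paper's exact settings.

```python
import torch


def augment_with_interference(speech: torch.Tensor,
                              interference: torch.Tensor,
                              snr_db_low: float = 0.0,
                              snr_db_high: float = 20.0) -> torch.Tensor:
    # Hypothetical sketch: overlay noise or a competing speaker at a
    # random SNR. The self-supervised targets would still be computed
    # from the clean `speech`, so the encoder learns to ignore the
    # added interference.
    # speech, interference: 1-D waveforms at the same sample rate.
    # Tile or trim the interference to match the speech length.
    if interference.numel() < speech.numel():
        reps = -(-speech.numel() // interference.numel())  # ceil division
        interference = interference.repeat(reps)
    interference = interference[: speech.numel()]

    # Draw a random target SNR and scale the interference so that
    # 10 * log10(P_speech / P_interference_scaled) == snr_db.
    snr_db = torch.empty(()).uniform_(snr_db_low, snr_db_high)
    p_speech = speech.pow(2).mean()
    p_interf = interference.pow(2).mean().clamp_min(1e-10)
    scale = torch.sqrt(p_speech / (p_interf * 10.0 ** (snr_db / 10.0)))
    return speech + scale * interference
```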
