Large Language Model Can Transcribe Speech in Multi-Talker Scenarios with Versatile Instructions

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
Shujie Hu
Jiawen Kang
Zhaoqing Li
Yuejiao Wang
Xixin Wu
Xunying Liu
Helen Meng
Main: 4 Pages
2 Figures
Bibliography: 1 Page
Abstract

Recent advancements in large language models (LLMs) have revolutionized various domains, bringing significant progress and new opportunities. Despite progress in speech-related tasks, LLMs have not been sufficiently explored in multi-talker scenarios. In this work, we present a pioneering effort to investigate the capability of LLMs in transcribing speech in multi-talker environments, following versatile instructions related to multi-talker automatic speech recognition (ASR), target-talker ASR, and ASR based on specific talker attributes such as sex, occurrence order, language, and spoken keywords. Our approach utilizes WavLM and Whisper encoders to extract multi-faceted speech representations that are sensitive to speaker characteristics and semantic context. These representations are then fed into an LLM fine-tuned with LoRA, endowing it with speech comprehension and transcription capabilities. Comprehensive experiments reveal the promising performance of our proposed system, MT-LLM, in cocktail-party scenarios, highlighting the potential of LLMs to handle speech-related tasks based on user instructions in such complex settings.
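The pipeline the abstract describes can be sketched in code. The following is a minimal sketch only, assuming HuggingFace checkpoints (microsoft/wavlm-large, openai/whisper-small, a Llama-2 backbone), a single linear projector, truncation-based length alignment, and LoRA on the query/value projections; the paper does not specify these choices, so every checkpoint name, module, and hyperparameter below is illustrative rather than the authors' actual configuration.

```python
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, WavLMModel, WhisperModel

class MTLLMSketch(nn.Module):
    """Dual speech encoders feeding a LoRA-adapted LLM (illustrative only)."""

    def __init__(self, llm_name: str = "meta-llama/Llama-2-7b-hf"):
        super().__init__()
        # WavLM features carry speaker characteristics; Whisper's encoder
        # supplies semantically oriented representations.
        self.wavlm = WavLMModel.from_pretrained("microsoft/wavlm-large")
        self.whisper_enc = WhisperModel.from_pretrained("openai/whisper-small").encoder
        llm = AutoModelForCausalLM.from_pretrained(llm_name)
        hidden = llm.config.hidden_size
        # LoRA adapters: the LLM backbone stays frozen; only low-rank updates train.
        lora = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM",
                          target_modules=["q_proj", "v_proj"])
        self.llm = get_peft_model(llm, lora)
        # Hypothetical linear projector mapping the concatenated encoder
        # streams into the LLM's embedding space.
        self.proj = nn.Linear(self.wavlm.config.hidden_size
                              + self.whisper_enc.config.d_model, hidden)

    def forward(self, waveform, mel_features, instruction_ids):
        spk = self.wavlm(waveform).last_hidden_state            # (B, T1, D1)
        sem = self.whisper_enc(mel_features).last_hidden_state  # (B, T2, D2)
        # Naive length alignment by truncation; the real system may
        # resample or cross-attend instead.
        t = min(spk.size(1), sem.size(1))
        speech = self.proj(torch.cat([spk[:, :t], sem[:, :t]], dim=-1))
        # Prepend the projected speech embeddings to the embedded text
        # instruction so the LLM conditions its output on both.
        text = self.llm.get_input_embeddings()(instruction_ids)
        return self.llm(inputs_embeds=torch.cat([speech, text], dim=1))
```

Concatenating the two encoder streams is one plausible way to combine speaker-sensitive and semantic features; the instruction tokens (e.g., "transcribe the female speaker") then steer which talker's speech the model transcribes.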
