Speech disorders such as dysarthria and anarthria can severely impair a patient's ability to communicate verbally. Speech-decoding brain-computer interfaces (BCIs) offer a potential alternative by directly translating speech intentions into spoken words, serving as speech neuroprostheses. This paper reports an experimental protocol for Mandarin Chinese speech-decoding BCIs, along with the corresponding decoding algorithms. Stereo-electroencephalography (SEEG) and synchronized audio data were collected from eight drug-resistant epilepsy patients as they performed a word-level reading task. The proposed SEEG-Audio Contrastive Matching (SACM) framework, based on contrastive learning, achieved decoding accuracies significantly above chance level in both speech detection and speech decoding tasks. Electrode-wise analysis revealed that a single electrode in the sensorimotor cortex achieved performance comparable to that of the full electrode array. These findings provide valuable insights for developing more accurate online speech-decoding BCIs.
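The abstract describes SACM only as a contrastive learning-based framework for matching SEEG and audio. As a rough illustration of what such an objective can look like, the sketch below shows a CLIP-style symmetric InfoNCE loss over paired SEEG and audio embeddings; the encoder outputs, embedding dimension, and temperature value are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_matching_loss(seeg_emb, audio_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired SEEG/audio embeddings.

    seeg_emb, audio_emb: (batch, dim) tensors produced by separate encoders;
    the encoder architectures and `dim` are placeholders, not the authors' design.
    """
    # L2-normalize so dot products become cosine similarities.
    seeg_emb = F.normalize(seeg_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)

    # Pairwise similarity matrix: entry (i, j) compares SEEG trial i with audio clip j.
    logits = seeg_emb @ audio_emb.t() / temperature

    # Matching pairs lie on the diagonal; treat retrieval in both directions as classification.
    targets = torch.arange(seeg_emb.size(0), device=seeg_emb.device)
    loss_seeg_to_audio = F.cross_entropy(logits, targets)
    loss_audio_to_seeg = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_seeg_to_audio + loss_audio_to_seeg)
```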
@article{wang2025_2505.19652,
  title={SACM: SEEG-Audio Contrastive Matching for Chinese Speech Decoding},
  author={Hongbin Wang and Zhihong Jia and Yuanzhong Shen and Ziwei Wang and Siyang Li and Kai Shu and Feng Hu and Dongrui Wu},
  journal={arXiv preprint arXiv:2505.19652},
  year={2025}
}