Covert speech is the act of imagining speech without producing audible sound or any articulatory movement. Decoding covert speech from the electroencephalogram (EEG) is challenging due to our limited understanding of the mapping from neural activity to pronunciation and the low signal-to-noise ratio of EEG. In this study, we developed a large-scale multi-utterance speech EEG dataset from 57 right-handed native English-speaking subjects, each performing covert and overt speech tasks by repeating the same word in five utterances within a ten-second window. Given the spatio-temporal nature of neural activation during speech pronunciation, we propose the Functional Areas Spatio-Temporal Transformer (FAST), a framework that converts EEG signals into tokens and encodes the resulting sequence with a transformer architecture. Visualizations of FAST-generated activation maps across frontal and temporal brain regions reveal distinct, interpretable neural features for each covertly spoken word, providing new insights into the discriminative features of the neural representation of covert speech. To our knowledge, this is the first study to provide such interpretable evidence for speech decoding from EEG. The code for this work has been made public at this https URL
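The abstract describes a token-then-encode pipeline but does not specify the architecture here. As a rough illustration only, the following minimal PyTorch sketch shows what tokenizing multi-channel EEG into spatio-temporal patches and encoding the token sequence with a transformer could look like; all class names, patch sizes, and dimensions are assumptions for illustration, not FAST's actual design, and positional encodings are omitted for brevity.

import torch
import torch.nn as nn

class SpatioTemporalTokenizer(nn.Module):
    """Splits multi-channel EEG into temporal patches and projects each
    patch to a token embedding (hypothetical, illustrative dimensions)."""
    def __init__(self, n_channels=64, patch_len=50, d_model=128):
        super().__init__()
        self.patch_len = patch_len
        # Shared linear projection: (channels * patch_len) -> d_model
        self.proj = nn.Linear(n_channels * patch_len, d_model)

    def forward(self, x):
        # x: (batch, channels, time)
        b, c, t = x.shape
        n_patches = t // self.patch_len
        x = x[:, :, : n_patches * self.patch_len]
        # Regroup the time axis into patches: -> (batch, n_patches, channels * patch_len)
        x = x.reshape(b, c, n_patches, self.patch_len).permute(0, 2, 1, 3)
        x = x.reshape(b, n_patches, c * self.patch_len)
        return self.proj(x)  # (batch, n_patches, d_model)

class EEGTransformerClassifier(nn.Module):
    """Tokenize EEG, encode the token sequence, pool, and classify the word."""
    def __init__(self, n_channels=64, patch_len=50, d_model=128,
                 n_heads=4, n_layers=4, n_classes=5):
        super().__init__()
        self.tokenizer = SpatioTemporalTokenizer(n_channels, patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        tokens = self.tokenizer(x)         # EEG -> token sequence
        encoded = self.encoder(tokens)     # transformer sequence encoding
        return self.head(encoded.mean(1))  # mean-pool tokens, predict word class

# Example: a batch of 8 ten-second trials at an assumed 100 Hz with 64 channels
model = EEGTransformerClassifier()
logits = model(torch.randn(8, 64, 1000))
print(logits.shape)  # torch.Size([8, 5])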
@article{jiang2025_2504.03762,
  title   = {Decoding Covert Speech from EEG Using a Functional Areas Spatio-Temporal Transformer},
  author  = {Muyun Jiang and Yi Ding and Wei Zhang and Kok Ann Colin Teo and LaiGuan Fong and Shuailei Zhang and Zhiwei Guo and Chenyu Liu and Raghavan Bhuvanakantham and Wei Khang Jeremy Sim and Chuan Huat Vince Foo and Rong Hui Jonathan Chua and Parasuraman Padmanabhan and Victoria Leong and Jia Lu and Balazs Gulyas and Cuntai Guan},
  journal = {arXiv preprint arXiv:2504.03762},
  year    = {2025}
}