
Coherent Language Reconstruction from Brain Recordings with Flexible Multi-Modal Input Stimuli

Abstract

Decoding thoughts from brain activity offers valuable insights into human cognition and enables promising applications in brain-computer interaction. While prior studies have explored language reconstruction from fMRI data, they are typically limited to single-modality inputs such as images or audio. In contrast, human thought is inherently multimodal. To bridge this gap, we propose a unified and flexible framework for reconstructing coherent language from brain recordings elicited by diverse input modalities: visual, auditory, and textual. Our approach leverages vision-language models (VLMs), using modality-specific experts to jointly interpret information across modalities. Experiments demonstrate that our method achieves performance comparable to state-of-the-art systems while remaining adaptable and extensible. This work advances toward more ecologically valid and generalizable mind decoding.
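
The abstract does not spell out the architecture, so the following is a minimal, hypothetical sketch of the modality-specific expert idea it describes: per-modality projection experts map fMRI features into a shared space that conditions a small text decoder standing in for the VLM. All module names, dimensions, and the GRU decoder are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ModalityExpertDecoder(nn.Module):
    """Hypothetical sketch: one expert per stimulus modality over fMRI
    features, fused into a conditioning state for a toy text decoder."""
    def __init__(self, fmri_dim=4096, hidden_dim=512, vocab_size=32000):
        super().__init__()
        # One expert per stimulus modality (visual, auditory, textual).
        self.experts = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(fmri_dim, hidden_dim), nn.GELU(),
                             nn.Linear(hidden_dim, hidden_dim))
            for m in ("visual", "auditory", "textual")
        })
        # Toy autoregressive decoder standing in for the VLM language head.
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.lm_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, fmri, modality, target_embeds):
        # Route the recording through the expert matching its stimulus modality.
        cond = self.experts[modality](fmri).unsqueeze(0)   # (1, B, H) initial state
        out, _ = self.decoder(target_embeds, cond)         # teacher-forced decoding
        return self.lm_head(out)                           # (B, T, vocab) logits

# Usage: a batch of 2 recordings elicited by visual stimuli.
model = ModalityExpertDecoder()
fmri = torch.randn(2, 4096)
tgt = torch.randn(2, 16, 512)   # placeholder target token embeddings
logits = model(fmri, "visual", tgt)
print(logits.shape)             # torch.Size([2, 16, 32000])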

@article{ye2025_2505.10356,
  title={Coherent Language Reconstruction from Brain Recordings with Flexible Multi-Modal Input Stimuli},
  author={Chunyu Ye and Shaonan Wang},
  journal={arXiv preprint arXiv:2505.10356},
  year={2025}
}