MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-Text Decoding

Abstract

Decoding functional magnetic resonance imaging (fMRI) signals into text has been a key challenge in the neuroscience community, with the potential to advance brain-computer interfaces and uncover deeper insights into brain mechanisms. However, existing approaches often struggle with suboptimal predictive performance, limited task variety, and poor generalization across subjects. To address these limitations, we propose MindLLM, a model designed for subject-agnostic and versatile fMRI-to-text decoding. MindLLM consists of an fMRI encoder and an off-the-shelf LLM. The fMRI encoder employs a neuroscience-informed attention mechanism that accommodates varying input shapes across subjects and thus achieves high-performance subject-agnostic decoding. Moreover, we introduce Brain Instruction Tuning (BIT), a novel approach that enhances the model's ability to capture diverse semantic representations from fMRI signals, facilitating more versatile decoding. We evaluate MindLLM on comprehensive fMRI-to-text benchmarks. Results demonstrate that our model outperforms the baselines, improving downstream tasks by 12.0%, unseen subject generalization by 16.4%, and novel task adaptation by 25.0%. Furthermore, the attention patterns in MindLLM provide interpretable insights into its decision-making process.
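
The key to the subject-agnostic design described above is an attention mechanism that maps a variable number of voxels to a fixed set of tokens the LLM can consume. The sketch below illustrates one way such a query-based cross-attention encoder can handle varying input shapes; it is a minimal illustration under assumed design choices, and all module names, feature dimensions, and hyperparameters are hypothetical rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    """Minimal sketch of a subject-agnostic fMRI encoder.

    A fixed set of learned queries cross-attends over a variable number of
    voxels, so subjects with different voxel counts all map to the same
    number of output tokens. Shapes and dimensions here are illustrative.
    """

    def __init__(self, d_model: int = 768, n_queries: int = 32, n_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        # Assume each voxel is described by a small feature vector
        # (e.g., its signal value plus positional information),
        # projected into the model dimension.
        self.voxel_proj = nn.Linear(4, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.out_proj = nn.Linear(d_model, d_model)  # match the LLM embedding size

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (batch, n_voxels, 4); n_voxels may differ across subjects.
        kv = self.voxel_proj(voxels)
        q = self.queries.unsqueeze(0).expand(voxels.size(0), -1, -1)
        tokens, _ = self.attn(q, kv, kv)  # (batch, n_queries, d_model)
        return self.out_proj(tokens)      # fed to the LLM as soft prompt tokens

# Two subjects with different voxel counts yield identically shaped outputs.
enc = FMRIEncoder()
print(enc(torch.randn(1, 15000, 4)).shape)  # torch.Size([1, 32, 768])
print(enc(torch.randn(1, 12000, 4)).shape)  # torch.Size([1, 32, 768])
```

Because the learned queries, not the voxels, determine the output length, the same encoder weights can be shared across subjects with different brain parcellations, which is what makes subject-agnostic decoding possible in this kind of design.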

@article{qiu2025_2502.15786,
  title={MindLLM: A Subject-Agnostic and Versatile Model for fMRI-to-Text Decoding},
  author={Weikang Qiu and Zheng Huang and Haoyu Hu and Aosong Feng and Yujun Yan and Rex Ying},
  journal={arXiv preprint arXiv:2502.15786},
  year={2025}
}