This work introduces a novel approach to fMRI-based visual image reconstruction using a subject-agnostic common representation space. We show that subjects' brain signals can be aligned in this common space during training to form a semantically aligned common brain. Building on this, we demonstrate that aligning lightweight subject-specific modules to a reference subject is significantly more efficient than traditional end-to-end training, and that our approach excels in low-data scenarios. We evaluate our method on multiple datasets, demonstrating that the common space is both subject- and dataset-agnostic.
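To make the idea concrete, the sketch below shows one plausible reading of "aligning a lightweight subject-specific module to a reference subject": a small per-subject projection is trained to map a new subject's fMRI responses onto the frozen shared-space embeddings of a reference subject for the same stimuli. The abstract does not specify architectures, losses, or dimensions, so every name, hyperparameter, and the cosine alignment objective here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch only: module sizes, the cosine alignment loss, and all
# names below are assumptions; the paper's actual method may differ.
import torch
import torch.nn as nn

class SubjectAdapter(nn.Module):
    """Lightweight per-subject module mapping raw fMRI voxels to a shared space."""
    def __init__(self, n_voxels: int, shared_dim: int):
        super().__init__()
        self.proj = nn.Linear(n_voxels, shared_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

def alignment_loss(new_emb: torch.Tensor, ref_emb: torch.Tensor) -> torch.Tensor:
    """Pull a new subject's embeddings toward the reference subject's embeddings
    for the same stimuli (simple cosine objective, assumed for illustration)."""
    return 1.0 - nn.functional.cosine_similarity(new_emb, ref_emb, dim=-1).mean()

# Toy usage: train only the new subject's adapter; the reference embeddings
# (and the downstream reconstruction pipeline) stay frozen.
n_voxels_new, shared_dim, batch = 12000, 512, 8
adapter = SubjectAdapter(n_voxels_new, shared_dim)
optim = torch.optim.Adam(adapter.parameters(), lr=1e-3)

fmri_new = torch.randn(batch, n_voxels_new)   # new subject's responses to shared stimuli
ref_emb = torch.randn(batch, shared_dim)      # reference subject's shared-space embeddings

loss = alignment_loss(adapter(fmri_new), ref_emb)
loss.backward()
optim.step()
```

Because only the small adapter is optimized while the shared space and decoder stay fixed, this kind of alignment needs far fewer trainable parameters and training examples than retraining the full reconstruction pipeline per subject, which is consistent with the low-data advantage the abstract claims.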
@article{zangos2025_2505.01670,
  title   = {Efficient Multi Subject Visual Reconstruction from fMRI Using Aligned Representations},
  author  = {Christos Zangos and Danish Ebadulla and Thomas Christopher Sprague and Ambuj Singh},
  journal = {arXiv preprint arXiv:2505.01670},
  year    = {2025}
}