CATD: Unified Representation Learning for EEG-to-fMRI Cross-Modal Generation

Weiheng Yao
Zhihan Lyu
Mufti Mahmud
Ning Zhong
Baiying Lei
Shuqiang Wang
Abstract

Multi-modal neuroimaging analysis is crucial for a comprehensive understanding of brain function and pathology, as it integrates complementary imaging techniques and thus overcomes the limitations of any single modality. However, the high cost and limited availability of certain modalities pose significant challenges. To address these issues, this paper proposes the Condition-Aligned Temporal Diffusion (CATD) framework for end-to-end cross-modal synthesis of neuroimaging, enabling the generation of Blood Oxygen Level Dependent (BOLD) signals, as detected by functional magnetic resonance imaging (fMRI), from more accessible electroencephalography (EEG) signals. By constructing the Conditionally Aligned Block (CAB), heterogeneous neuroimages are aligned into a latent space, yielding a unified representation that serves as the foundation for cross-modal transformation in neuroimaging. Combined with the proposed Dynamic Time-Frequency Segmentation (DTFS) module, the framework also exploits EEG signals to improve the temporal resolution of BOLD signals, capturing finer dynamic details of brain activity. Experimental validation demonstrates that the framework improves the accuracy of brain activity state prediction by 9.13% (reaching 69.8%), enhances the diagnostic accuracy of brain disorders by 4.10% (reaching 99.55%), effectively identifies abnormal brain regions, and enhances the temporal resolution of BOLD signals. By unifying heterogeneous neuroimaging data in a latent representation space, the proposed framework establishes a new paradigm for cross-modal neuroimaging synthesis, showing promise for medical applications such as improving Parkinson's disease prediction and identifying abnormal brain regions.
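
The abstract names three components (DTFS tokenization of EEG, CAB latent alignment, and a conditional diffusion generator) but gives no implementation details. The following is a minimal PyTorch sketch of how such a pipeline could fit together, not the authors' implementation: the EEG shape (batch, channels, samples), the 90-ROI BOLD representation, the STFT-based DTFS tokenizer, the cosine alignment loss, the DDPM noise-prediction objective, and all hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn

class DTFS(nn.Module):
    # Dynamic Time-Frequency Segmentation (sketch): STFT each EEG channel,
    # then project each spectrogram frame into a token embedding.
    def __init__(self, n_fft=64, hop=32, d_model=128):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        self.proj = nn.LazyLinear(d_model)  # infers input size on first call

    def forward(self, eeg):                                   # eeg: (B, C, T)
        B, C, T = eeg.shape
        win = torch.hann_window(self.n_fft, device=eeg.device)
        spec = torch.stft(eeg.reshape(B * C, T), self.n_fft, hop_length=self.hop,
                          window=win, return_complex=True).abs()
        F_, W = spec.shape[-2], spec.shape[-1]
        spec = spec.reshape(B, C, F_, W).permute(0, 3, 1, 2)  # (B, W, C, F)
        return self.proj(spec.flatten(2))                     # (B, W, D) tokens

class CAB(nn.Module):
    # Conditionally Aligned Block (sketch): encode EEG tokens and BOLD frames
    # into one latent space so an alignment loss can couple the modalities.
    def __init__(self, d_model=128, n_rois=90):  # 90 ROIs is an assumption
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.eeg_enc = nn.TransformerEncoder(layer, num_layers=2)
        self.bold_enc = nn.Linear(n_rois, d_model)

    def forward(self, eeg_tokens, bold=None):     # bold: (B, T_bold, R)
        z_eeg = self.eeg_enc(eeg_tokens)
        z_bold = self.bold_enc(bold) if bold is not None else None
        return z_eeg, z_bold

def alignment_loss(z_eeg, z_bold):
    # One plausible alignment objective: cosine distance between pooled latents.
    e = nn.functional.normalize(z_eeg.mean(1), dim=-1)
    b = nn.functional.normalize(z_bold.mean(1), dim=-1)
    return 1.0 - (e * b).sum(-1).mean()

class Denoiser(nn.Module):
    # Noise predictor for the conditional diffusion step (sketch): a per-frame
    # MLP fed the noisy BOLD frame, a timestep embedding, and the EEG condition.
    def __init__(self, n_rois=90, d_model=128, n_steps=1000):
        super().__init__()
        self.t_emb = nn.Embedding(n_steps, d_model)
        self.net = nn.Sequential(nn.Linear(n_rois + 2 * d_model, 256), nn.SiLU(),
                                 nn.Linear(256, n_rois))

    def forward(self, x_t, t, cond):                          # x_t: (B, T, R)
        ctx = torch.cat([self.t_emb(t), cond], dim=-1)        # (B, 2D)
        ctx = ctx.unsqueeze(1).expand(-1, x_t.shape[1], -1)   # broadcast over frames
        return self.net(torch.cat([x_t, ctx], dim=-1))        # predicted noise

def training_step(eeg, bold, dtfs, cab, denoiser, alphas_bar):
    # Standard DDPM noise-prediction loss, conditioned on the aligned EEG
    # latent, plus the latent alignment term.
    tokens = dtfs(eeg)
    z_eeg, z_bold = cab(tokens, bold)
    t = torch.randint(0, alphas_bar.numel(), (bold.shape[0],), device=bold.device)
    a = alphas_bar[t].view(-1, 1, 1)
    noise = torch.randn_like(bold)
    x_t = a.sqrt() * bold + (1.0 - a).sqrt() * noise          # forward diffusion
    eps = denoiser(x_t, t, z_eeg.mean(1))
    return nn.functional.mse_loss(eps, noise) + alignment_loss(z_eeg, z_bold)

# Toy usage: 2 samples, 32 EEG channels, 1024 samples; 40 BOLD frames, 90 ROIs.
dtfs, cab, denoiser = DTFS(), CAB(), Denoiser()
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)
loss = training_step(torch.randn(2, 32, 1024), torch.randn(2, 40, 90),
                     dtfs, cab, denoiser, alphas_bar)

At inference time no BOLD signal is available: one would start from Gaussian noise and iterate the reverse diffusion steps, conditioning each denoising call on z_eeg alone.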

@article{yao2025_2408.00777,
  title={CATD: Unified Representation Learning for EEG-to-fMRI Cross-Modal Generation},
  author={Weiheng Yao and Zhihan Lyu and Mufti Mahmud and Ning Zhong and Baiying Lei and Shuqiang Wang},
  journal={arXiv preprint arXiv:2408.00777},
  year={2025}
}