
MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction

Luhui Cai
Weiming Zeng
Hongyu Chen
Hua Zhang
Yueyang Li
Yu Feng
Hongjie Yan
Lingbin Bian
Wai Ting Siok
Nizhuan Wang
Abstract

Graph deep learning (GDL) has demonstrated impressive performance in predicting population-based brain disorders (BDs) by integrating both imaging and non-imaging data. However, the effectiveness of GDL-based methods depends heavily on the quality of the multi-modal population graph and tends to degrade as the graph scale increases. Furthermore, these methods often restrict interactions between imaging and non-imaging data to node-edge interactions within the graph, overlooking complex inter-modal correlations and leading to suboptimal results. To overcome these challenges, we propose MM-GTUNets, an end-to-end, graph-transformer-based multi-modal graph deep learning (MMGDL) framework designed for large-scale brain disorder prediction. Specifically, to effectively leverage the rich multi-modal information related to diseases, we introduce Modality Reward Representation Learning (MRRL), which adaptively constructs population graphs using a reward system. In addition, we employ a variational autoencoder to reconstruct latent representations of the non-imaging features aligned with the imaging features. Building on this, we propose Adaptive Cross-Modal Graph Learning (ACMGL), which captures critical modality-specific and modality-shared features through a unified GTUNet encoder, combining the advantages of Graph UNet and Graph Transformer, together with a feature fusion module. We validated our method on two public multi-modal datasets, ABIDE and ADHD-200, demonstrating its superior performance in diagnosing BDs. Our code is available at this https URL.
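As a rough illustration of the kind of GTUNet building block the abstract describes (a graph transformer layer combined with Graph U-Net-style top-k pooling over a population graph), the sketch below shows one such block in plain PyTorch on a dense adjacency matrix. All class names, hyperparameters, and the toy data are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' code): an adjacency-masked graph
# transformer layer followed by Graph U-Net-style top-k node pooling.
import torch
import torch.nn as nn

class GraphTransformerLayer(nn.Module):
    """Multi-head self-attention restricted to graph edges (dense adjacency)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, adj):
        # Mask attention so a node only attends to its neighbours (plus itself).
        mask = (adj + torch.eye(adj.size(-1), device=adj.device)) == 0   # True = blocked
        mask = mask.repeat_interleave(self.heads, dim=0)                  # (B*heads, N, N)
        h, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + h)
        return self.norm2(x + self.ff(x))

class TopKPool(nn.Module):
    """Graph U-Net-style pooling: keep the k highest-scoring nodes and their subgraph."""
    def __init__(self, dim, ratio=0.5):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.ratio = ratio

    def forward(self, x, adj):
        s = self.score(x).squeeze(-1)                       # (B, N) node scores
        k = max(1, int(self.ratio * x.size(1)))
        idx = s.topk(k, dim=-1).indices                     # indices of kept nodes
        x = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        x = x * torch.sigmoid(torch.gather(s, 1, idx)).unsqueeze(-1)       # gate by score
        adj = torch.stack([a[i][:, i] for a, i in zip(adj, idx)])          # pooled adjacency
        return x, adj

# Toy usage: a random population graph of 200 subjects with 64-d node features.
x = torch.randn(1, 200, 64)                       # (batch, nodes, features)
adj = (torch.rand(1, 200, 200) > 0.9).float()     # sparse random adjacency
layer, pool = GraphTransformerLayer(64), TopKPool(64)
h = layer(x, adj)
h_pooled, adj_pooled = pool(h, adj)
print(h_pooled.shape, adj_pooled.shape)           # (1, 100, 64) and (1, 100, 100)
```

In a U-Net-shaped encoder, several such blocks would be stacked, with the pooled graphs later unpooled and merged via skip connections; the fusion of imaging and non-imaging branches described in the abstract would sit on top of the resulting node embeddings.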

@article{cai2025_2406.14455,
  title={MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction},
  author={Luhui Cai and Weiming Zeng and Hongyu Chen and Hua Zhang and Yueyang Li and Yu Feng and Hongjie Yan and Lingbin Bian and Wai Ting Siok and Nizhuan Wang},
  journal={arXiv preprint arXiv:2406.14455},
  year={2025}
}