
Biologically Plausible Brain Graph Transformer

Abstract

State-of-the-art brain graph analysis methods fail to fully encode the small-world architecture of brain graphs (characterized by the presence of hubs and functional modules), and therefore lack biological plausibility to some extent. This limitation hinders their ability to accurately represent the brain's structural and functional properties, thereby restricting the effectiveness of machine learning models in tasks such as brain disorder detection. In this work, we propose a novel Biologically Plausible Brain Graph Transformer (BioBGT) that encodes the small-world architecture inherent in brain graphs. Specifically, we present a network entanglement-based node importance encoding technique that captures the structural importance of nodes in global information propagation during brain graph communication, highlighting the biological properties of the brain structure. Furthermore, we introduce a functional module-aware self-attention to preserve the functional segregation and integration characteristics of brain graphs in the learned representations. Experimental results on three benchmark datasets demonstrate that BioBGT outperforms state-of-the-art models, enhancing biologically plausible brain graph representations for various brain graph analytical tasks.

@article{peng2025_2502.08958,
  title={Biologically Plausible Brain Graph Transformer},
  author={Ciyuan Peng and Yuelong Huang and Qichao Dong and Shuo Yu and Feng Xia and Chengqi Zhang and Yaochu Jin},
  journal={arXiv preprint arXiv:2502.08958},
  year={2025}
}