Towards Efficient Training of Graph Neural Networks: A Multiscale Approach

Graph Neural Networks (GNNs) have emerged as a powerful tool for learning and inference on graph-structured data, and are widely used in applications that involve large amounts of data and large graphs. However, training on such data requires substantial memory and extensive computation. In this paper, we introduce a novel framework for efficient multiscale training of GNNs, designed to integrate information across multiscale representations of a graph. Our approach leverages a hierarchical graph representation, exploiting coarse graph scales in the training process, where each coarse-scale graph has fewer nodes and edges. Based on this approach, we propose a suite of GNN training methods, including coarse-to-fine, sub-to-full, and multiscale gradient computation. We demonstrate the effectiveness of our methods on various datasets and learning tasks.
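The coarse-to-fine idea can be sketched as follows: train cheaply on a coarsened graph, then continue training on the full graph with the same weights. This is a minimal illustrative sketch, not the authors' actual algorithm: the coarsening rule (merging consecutive node pairs via an aggregation matrix) and the one-layer linear GNN are assumptions chosen for brevity. The key property it illustrates is that the weight matrix is shared across scales, since coarsening changes the number of nodes but not the feature dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(A):
    """Symmetric normalization of an adjacency matrix with self-loops."""
    A = A + np.eye(len(A))
    d = A.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(d))
    return D @ A @ D

def coarsen(A, X, y):
    """Toy coarsening: merge node pairs (2i, 2i+1) with aggregation matrix P."""
    n = len(A)
    P = np.zeros((n // 2, n))
    for i in range(n // 2):
        P[i, 2 * i] = P[i, 2 * i + 1] = 0.5
    return P @ A @ P.T, P @ X, P @ y

def train(A_hat, X, y, W, steps, lr=0.1):
    """Gradient descent on MSE for a one-layer GNN: y_hat = A_hat @ X @ W."""
    for _ in range(steps):
        pred = A_hat @ X @ W
        W = W - lr * (X.T @ A_hat.T @ (pred - y)) / len(y)
    return W

# Fine graph: a ring of 8 nodes, 4 input features, scalar regression target.
n, f = 8, 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
X = rng.standard_normal((n, f))
y = rng.standard_normal((n, 1))

# Coarse stage: 4 nodes, so each gradient step touches a smaller graph.
A_c, X_c, y_c = coarsen(A, X, y)
W = train(normalize(A_c), X_c, y_c, np.zeros((f, 1)), steps=50)

# Fine stage: resume from the coarse-trained weights on the full graph.
A_hat = normalize(A)
loss_before = float(np.mean((A_hat @ X @ W - y) ** 2))
W = train(A_hat, X, y, W, steps=20)
loss_after = float(np.mean((A_hat @ X @ W - y) ** 2))
```

Because the coarse graph has half the nodes and edges, the coarse-stage steps are cheaper, while the shared weights give the fine stage a warm start.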
@article{gal2025_2503.19666,
  title={Towards Efficient Training of Graph Neural Networks: A Multiscale Approach},
  author={Eshed Gal and Moshe Eliasof and Carola-Bibiane Schönlieb and Eldad Haber and Eran Treister},
  journal={arXiv preprint arXiv:2503.19666},
  year={2025}
}