KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning

In recent years, Graph Neural Networks (GNNs) have become the de facto tool for learning node and graph representations. Most GNNs consist of a sequence of neighborhood aggregation (a.k.a. message-passing) layers, within which the representation of each node is updated based on those of its neighbors. The most expressive message-passing GNNs combine the sum aggregator with MLPs for feature transformation, thanks to the universal approximation capabilities of the latter. However, the limitations of MLPs recently motivated the introduction of another family of universal approximators, called Kolmogorov-Arnold Networks (KANs), which rely on a different representation theorem. In this work, we compare the performance of KANs against that of MLPs on graph learning tasks. We implement three new KAN-based GNN layers, inspired respectively by the GCN, GAT, and GIN layers. We evaluate two different implementations of KANs built on two distinct families of basis functions, namely B-splines and radial basis functions. We perform extensive experiments on node classification, link prediction, graph classification, and graph regression datasets. Our results indicate that KANs are on par with or better than MLPs on all tasks studied in this paper. We also show that the size and training time of RBF-based KANs are only marginally higher than those of MLPs, making them viable alternatives. Code available at this https URL.
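To make the idea concrete, below is a minimal sketch (not the authors' released code) of a GIN-style layer whose MLP is replaced by a two-layer KAN built from Gaussian radial basis functions. The class names RBFKANLayer and KAGINLayer, the dense adjacency matrix, and all hyperparameter values are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class RBFKANLayer(nn.Module):
        """One KAN layer using Gaussian radial basis functions (illustrative sketch).
        Each input coordinate is expanded onto a fixed grid of RBFs, and a learnable
        linear map sums the basis responses into the outputs, i.e.
        y_j = sum_i phi_ij(x_i), where each phi_ij is a weighted sum of Gaussians."""
        def __init__(self, in_dim, out_dim, num_centers=8, low=-2.0, high=2.0):
            super().__init__()
            self.register_buffer("centers", torch.linspace(low, high, num_centers))
            self.gamma = (num_centers - 1) / (high - low)  # inverse RBF width
            self.linear = nn.Linear(in_dim * num_centers, out_dim)

        def forward(self, x):
            # x: (num_nodes, in_dim) -> RBF features: (num_nodes, in_dim * num_centers)
            phi = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.gamma) ** 2)
            return self.linear(phi.flatten(start_dim=1))

    class KAGINLayer(nn.Module):
        """GIN-style layer where the usual MLP is replaced by a two-layer RBF-KAN."""
        def __init__(self, in_dim, hidden_dim, out_dim):
            super().__init__()
            self.eps = nn.Parameter(torch.zeros(1))  # learnable self-weight, as in GIN
            self.kan = nn.Sequential(RBFKANLayer(in_dim, hidden_dim),
                                     RBFKANLayer(hidden_dim, out_dim))

        def forward(self, x, adj):
            # Sum aggregation over neighbors (dense 0/1 adjacency) plus self features.
            return self.kan((1 + self.eps) * x + adj @ x)

Under these assumptions, a call such as KAGINLayer(16, 64, 64)(x, adj), with x of shape (num_nodes, 16) and adj a dense adjacency matrix, performs one round of sum-aggregation message passing followed by the KAN transformation.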
@article{bresson2025_2406.18380,
  title={KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning},
  author={Roman Bresson and Giannis Nikolentzos and George Panagopoulos and Michail Chatzianastasis and Jun Pang and Michalis Vazirgiannis},
  journal={arXiv preprint arXiv:2406.18380},
  year={2025}
}