
Input Snapshots Fusion for Scalable Discrete-Time Dynamic Graph Neural Networks

Abstract

In recent years, dynamic graph representation learning has attracted a surge of research interest, primarily focused on modeling the evolution of temporal-spatial patterns in real-world applications. However, within the domain of discrete-time dynamic graphs, temporal edges remain underexplored. Existing approaches often rely on additional sequential models to capture dynamics, incurring high computational and memory costs, particularly on large-scale graphs. To address this limitation, we propose the Input Snapshots Fusion based Dynamic Graph Neural Network (SFDyG), which combines Hawkes processes with graph neural networks to effectively capture temporal and structural patterns in dynamic graphs. By fusing multiple snapshots into a single temporal graph, SFDyG decouples computational complexity from the number of input snapshots, enabling efficient full-batch and mini-batch training. Experimental evaluations on eight diverse dynamic graph datasets for future link prediction demonstrate that SFDyG consistently outperforms existing methods.
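To make the fusion idea concrete, the following is a minimal sketch, not the paper's implementation: multiple snapshot edge lists are merged into one temporal graph whose edges carry their original time step and a Hawkes-style exponentially decayed weight, so a single GNN pass can attend to all snapshots at once. The `decay` hyperparameter and the tuple layout are illustrative assumptions.

```python
import math

def fuse_snapshots(snapshots, decay=0.5):
    """Fuse a list of snapshot edge lists into a single temporal graph.

    snapshots[t] is the edge list [(u, v), ...] observed at time step t.
    Each fused edge is (u, v, t, w), where w = exp(-decay * (T - t)) is a
    Hawkes-style influence weight: older snapshots contribute less.
    (`decay` is a hypothetical hyperparameter, not taken from the paper.)
    """
    T = len(snapshots) - 1  # index of the most recent snapshot
    fused = []
    for t, edges in enumerate(snapshots):
        w = math.exp(-decay * (T - t))
        fused.extend((u, v, t, w) for (u, v) in edges)
    return fused

# Three snapshots; the resulting graph has one edge entry per temporal edge,
# and its size no longer ties the downstream GNN to a per-snapshot recurrence.
snaps = [[(0, 1)], [(0, 1), (1, 2)], [(2, 3)]]
graph = fuse_snapshots(snaps)
```

The fused edge list can then be fed to any static message-passing GNN, with the decayed weight scaling each message.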

@article{qi2025_2405.06975,
  title={Input Snapshots Fusion for Scalable Discrete-Time Dynamic Graph Neural Networks},
  author={QingGuo Qi and Hongyang Chen and Minhao Cheng and Han Liu},
  journal={arXiv preprint arXiv:2405.06975},
  year={2025}
}