
Between Linear and Sinusoidal: Rethinking the Time Encoder in Dynamic Graph Learning

Abstract

Dynamic graph learning is essential for applications involving temporal networks and requires effective modeling of temporal relationships. Seminal attention-based models like TGAT and DyGFormer rely on sinusoidal time encoders to capture temporal relationships between edge events. In this paper, we study a simpler alternative: the linear time encoder, which avoids temporal information loss caused by sinusoidal functions and reduces the need for high-dimensional time encoders. We show that the self-attention mechanism can effectively learn to compute time spans from linear time encodings and extract relevant temporal patterns. Through extensive experiments on six dynamic graph datasets, we demonstrate that the linear time encoder improves the performance of TGAT and DyGFormer in most cases. Moreover, the linear time encoder can lead to significant savings in model parameters with minimal performance loss. For example, compared to a 100-dimensional sinusoidal time encoder, TGAT with a 2-dimensional linear time encoder saves 43% of parameters and achieves higher average precision on five datasets. These results can be readily used to positively impact the design choices of a wide variety of dynamic graph learning architectures. The experimental code is available at: this https URL.
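The contrast between the two encoders can be illustrated with a minimal NumPy sketch. The sinusoidal variant follows the TGAT-style functional form cos(t·ω) over a bank of frequencies (in the actual models the frequencies are learnable parameters; fixed log-spaced values are assumed here for illustration), while the linear variant simply exposes the scaled time value itself as a low-dimensional feature. Function names, dimensions, and the scaling constant are illustrative, not taken from the paper's code.

```python
import numpy as np

def sinusoidal_time_encode(t, dim=100):
    """TGAT-style sinusoidal encoding: cos(t * omega_i) for a bank of
    log-spaced frequencies (learnable in the real model; fixed here)."""
    omega = 1.0 / 10.0 ** np.linspace(0, 9, dim)   # frequencies spanning many time scales
    return np.cos(np.outer(np.asarray(t, dtype=float), omega))

def linear_time_encode(t, dim=2, scale=1e-6):
    """Linear encoding: the scaled time span itself, tiled to `dim` features,
    leaving the attention mechanism to compute time differences directly."""
    return np.tile(scale * np.asarray(t, dtype=float)[:, None], (1, dim))

# Example: three time spans between edge events (arbitrary units).
spans = np.array([0.0, 10.0, 1000.0])
print(sinusoidal_time_encode(spans).shape)  # (3, 100)
print(linear_time_encode(spans).shape)      # (3, 2)
```

The parameter saving in the abstract comes from the downstream projection: layers consuming a 2-dimensional encoding need far fewer weights than those consuming a 100-dimensional one.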

@article{chung2025_2504.08129,
  title={Between Linear and Sinusoidal: Rethinking the Time Encoder in Dynamic Graph Learning},
  author={Hsing-Huan Chung and Shravan Chaudhari and Xing Han and Yoav Wald and Suchi Saria and Joydeep Ghosh},
  journal={arXiv preprint arXiv:2504.08129},
  year={2025}
}