PiPAD: Pipelined and Parallel Dynamic GNN Training on GPUs

1 January 2023
Chunyang Wang
Desen Sun
Yunru Bai
Topics: GNN, AI4CE
Abstract

Dynamic Graph Neural Networks (DGNNs) have been broadly applied in various real-life applications, such as link prediction and pandemic forecasting, to capture both static structural information and temporal characteristics from dynamic graphs. Combining both time-dependent and time-independent components, DGNNs manifest substantial parallel computation and data reuse potential, but suffer from severe memory access inefficiency and data transfer overhead under the canonical one-graph-at-a-time training pattern. To tackle these challenges, we propose PiPAD, a PIpelined and PArallel Dynamic GNN training framework for end-to-end performance optimization on GPUs. At both the algorithm and runtime levels, PiPAD holistically reconstructs the overall training paradigm, from data organization to computation manner. Capable of processing multiple graph snapshots in parallel, PiPAD eliminates unnecessary data transmission and alleviates memory access inefficiency to improve overall performance. Our evaluation across various datasets shows that PiPAD achieves a 1.22x-9.57x speedup over state-of-the-art DGNN frameworks on three representative models.
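The core idea the abstract points to, overlapping host-to-device transfer of upcoming graph snapshots with computation on the current one instead of the one-graph-at-a-time pattern, can be illustrated with a minimal sketch. This is not PiPAD's actual implementation (which relies on its own data organization and GPU kernels); it is a hedged PyTorch example of transfer/compute pipelining with CUDA streams, and the names `snapshots`, `dgnn`, and `train_step` are hypothetical placeholders.

```python
# Conceptual sketch only: overlap snapshot upload with per-snapshot training,
# roughly the pipelining opportunity PiPAD exploits (not the PiPAD code itself).
import torch

copy_stream = torch.cuda.Stream()  # dedicated stream for host-to-device copies

def prefetch(snapshot):
    """Asynchronously move one (features, adjacency) snapshot to the GPU."""
    feats, adj = snapshot
    with torch.cuda.stream(copy_stream):
        return (feats.pin_memory().to("cuda", non_blocking=True),
                adj.pin_memory().to("cuda", non_blocking=True))

def train_epoch(dgnn, snapshots, train_step):
    nxt = prefetch(snapshots[0])
    for i in range(len(snapshots)):
        # Make sure the pending copy has finished before compute uses it.
        torch.cuda.current_stream().wait_stream(copy_stream)
        cur, nxt = nxt, None
        if i + 1 < len(snapshots):
            nxt = prefetch(snapshots[i + 1])  # upload overlaps compute below
        train_step(dgnn, cur)  # structural GNN + temporal update on the GPU
```

In a baseline loop, each snapshot is copied and then processed serially; the sketch instead keeps the copy engine and the compute engine busy at the same time, which is the kind of end-to-end overlap (in addition to processing multiple snapshots in parallel) that the paper targets.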
