HongTu: Scalable Full-Graph GNN Training on Multiple GPUs (via communication-optimized CPU data offloading)

25 November 2023
Qiange Wang
Yao Chen
Weng-Fai Wong
Bingsheng He
    GNN
ArXiv (abs) · PDF · HTML

Papers citing "HongTu: Scalable Full-Graph GNN Training on Multiple GPUs (via communication-optimized CPU data offloading)"

2 citing papers
Graph Neural Network Training Systems: A Performance Comparison of Full-Graph and Mini-Batch
Saurabh Bajaj
Hui Guan
Marco Serafini
GNN
01 Jun 2024
GSplit: Scaling Graph Neural Network Training on Large Graphs via Split-Parallelism
Sandeep Polisetty
Juelin Liu
Kobi Falus
Yi R. Fung
Seung-Hwan Lim
Hui Guan
Marco Serafini
GNN
24 Mar 2023