Heta: Distributed Training of Heterogeneous Graph Neural Networks

19 August 2024 (arXiv:2408.09697)
Yuchen Zhong, Junwei Su, Chuan Wu, Minjie Wang
Abstract

Heterogeneous Graph Neural Networks (HGNNs) leverage the diverse semantic relationships in Heterogeneous Graphs (HetGs) and have demonstrated remarkable learning performance in various applications. However, current distributed GNN training systems often overlook the unique characteristics of HetGs, such as varying feature dimensions and the prevalence of missing features among nodes, leading to suboptimal performance or even incompatibility with distributed HGNN training. We introduce Heta, a framework designed to address the communication bottleneck in distributed HGNN training. Heta leverages the inherent structure of HGNNs, namely independent relation-specific aggregations for each relation followed by a cross-relation aggregation, and advocates a novel Relation-Aggregation-First computation paradigm: it performs relation-specific aggregations within graph partitions and then exchanges only the partial aggregations. This design, coupled with a new graph partitioning method that divides a HetG based on its graph schema and HGNN computation dependency, substantially reduces communication overhead. Heta further incorporates an innovative GPU feature caching strategy that accounts for the different cache miss penalties associated with diverse node types. Comprehensive evaluations on various HGNN models and large heterogeneous graph datasets demonstrate that Heta outperforms state-of-the-art systems such as DGL and GraphLearn by up to 5.8x and 2.3x in end-to-end epoch time, respectively.
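The Relation-Aggregation-First idea described in the abstract can be illustrated with a minimal sketch. The toy graph, the choice of mean pooling as the relation-specific aggregator, and summation as the cross-relation combiner below are all illustrative assumptions, not Heta's actual implementation; the point is only that each partition reduces its relation's neighbors locally, so just the small partial aggregates cross partition boundaries.

```python
# Minimal sketch of Relation-Aggregation-First (illustrative assumptions only):
# each relation's edges live in one partition; partitions aggregate locally,
# then exchange partial aggregates for the cross-relation combination.
from collections import defaultdict

# Toy HetG: edges grouped by relation; features are per-node scalars.
edges = {
    "author-writes-paper": [(0, 10), (1, 10), (1, 11)],
    "paper-cites-paper":   [(10, 11), (11, 10)],
}
feat = {0: 1.0, 1: 2.0, 10: 3.0, 11: 4.0}

def relation_aggregate(edge_list):
    """Relation-specific aggregation (here: mean of source features per
    destination). In the paradigm sketched by the abstract, this runs entirely
    inside one graph partition, so raw neighbor features never leave it."""
    sums, counts = defaultdict(float), defaultdict(int)
    for src, dst in edge_list:
        sums[dst] += feat[src]
        counts[dst] += 1
    return {dst: sums[dst] / counts[dst] for dst in sums}

# Step 1: each partition computes its relation-specific partial aggregation.
partials = {rel: relation_aggregate(es) for rel, es in edges.items()}

# Step 2: exchange partials and apply the cross-relation aggregation (here: sum).
out = defaultdict(float)
for rel, agg in partials.items():
    for dst, val in agg.items():
        out[dst] += val

print(dict(out))  # {10: 5.5, 11: 5.0}
```

Communicating only `partials` rather than raw neighbor features is what drives the communication savings the abstract claims: the exchanged payload scales with the number of destination nodes per relation, not with the number of edges.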

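The miss-penalty-aware caching strategy can likewise be sketched. The abstract states only that Heta weighs caching decisions by the different miss penalties of diverse node types; the access frequencies, per-type feature sizes, and greedy budget-filling policy below are hypothetical stand-ins for that idea.

```python
# Hypothetical sketch of miss-penalty-aware feature caching: node types differ
# in feature dimension, so a miss on a wide-feature node costs more to fetch.
# Score each node by expected miss cost avoided, then greedily fill the cache.

# Per node: (node_id, node_type, estimated access frequency) -- all assumed values.
nodes = [(0, "paper", 50), (1, "paper", 10), (2, "author", 40), (3, "author", 5)]
feat_dim = {"paper": 768, "author": 64}  # bytes fetched on a cache miss
cache_budget = 900                       # total GPU cache capacity in bytes

# Rank by frequency * per-miss penalty rather than frequency alone.
scored = sorted(nodes, key=lambda n: n[2] * feat_dim[n[1]], reverse=True)

cached, used = [], 0
for node_id, ntype, _freq in scored:
    if used + feat_dim[ntype] <= cache_budget:
        cached.append(node_id)
        used += feat_dim[ntype]

print(cached)  # [0, 2, 3]: one wide "paper" node plus two cheap "author" nodes
```

Note how a frequency-only policy would spend the remaining budget trying to cache the second "paper" node and fail, whereas penalty-weighted scoring admits the cheaper "author" nodes instead.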