Low-latency Mini-batch GNN Inference on CPU-FPGA Heterogeneous Platform

Abstract

Mini-batch inference of Graph Neural Networks (GNNs) is a key problem in many real-world applications. Recently, a GNN design principle of decoupling model depth from receptive field has been proposed to address the well-known issue of neighborhood explosion. Decoupled GNN models achieve higher accuracy than the original models and demonstrate excellent scalability for mini-batch inference. We map decoupled GNNs onto CPU-FPGA heterogeneous platforms to achieve low-latency mini-batch inference. On the FPGA platform, we design a novel GNN hardware accelerator with an adaptive datapath, denoted Adaptive Computation Kernel (ACK), that can execute the various computation kernels of GNNs with low latency: (1) for dense computation kernels expressed as matrix multiplication, ACK works as a systolic array with fully localized connections; (2) for sparse computation kernels, ACK follows the scatter-gather paradigm and works as multiple parallel pipelines to support the irregular connectivity of graphs. The proposed task scheduling hides the CPU-FPGA data communication overhead to reduce the inference latency. We develop a fast design space exploration algorithm to generate a single accelerator for multiple target GNN models. We implement our accelerator on a state-of-the-art CPU-FPGA platform and evaluate the performance using three representative models (GCN, GraphSAGE, and GAT). Results show that our CPU-FPGA implementation achieves 21.4-50.8×, 2.9-21.6×, and 4.7× latency reduction compared with state-of-the-art implementations on CPU-only, CPU-GPU, and CPU-FPGA platforms, respectively.
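To make the two kernel types concrete, the following is a minimal software sketch (not the paper's hardware design) of the dense and sparse computations the ACK must support: a dense feature-transformation matrix multiplication and a sparse scatter-gather aggregation over graph edges. Function names and the toy mini-batch are illustrative assumptions, not from the paper.

```python
import numpy as np

def dense_transform(features, weight):
    # Dense kernel: feature transformation as matrix multiplication.
    # On the FPGA this corresponds to ACK's systolic-array mode.
    return features @ weight

def scatter_gather_aggregate(features, edges, num_nodes):
    # Sparse kernel: each edge (src, dst) scatters the source feature,
    # and the destination gathers (sum-aggregates) it, mirroring ACK's
    # scatter-gather pipeline mode over irregular graph connectivity.
    out = np.zeros((num_nodes, features.shape[1]))
    for src, dst in edges:
        out[dst] += features[src]
    return out

# Hypothetical toy mini-batch: 4 nodes, 16-dim features, a few edges.
x = np.random.rand(4, 16)
w = np.random.rand(16, 8)
edges = [(0, 1), (2, 1), (3, 0)]
h = scatter_gather_aggregate(dense_transform(x, w), edges, num_nodes=4)
```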
