ResearchTrend.AI

arXiv:2206.07244
OpSparse: a Highly Optimized Framework for Sparse General Matrix Multiplication on GPUs

15 June 2022
Zhaoyang Du
Yijin Guan
Tianchan Guan
Dimin Niu
Linyong Huang
Hongzhong Zheng
Yuan Xie
Abstract

Sparse general matrix multiplication (SpGEMM) is an important and expensive computation primitive in many real-world applications. Due to SpGEMM's inherent irregularity and the vast diversity of its input matrices, developing a high-performance SpGEMM implementation on modern processors such as GPUs is challenging. The state-of-the-art SpGEMM libraries (i.e., nsparse and spECK) adopt several algorithms to tackle the challenges of global load balance, local load balance, and allocation of the result matrix. While these libraries focus on the high-level algorithm design for SpGEMM, they neglect several low-level architecture-specific optimizations, which leads to inefficient implementations. In this paper, we classify these inefficiencies into seven categories. Based on our observations, we propose a highly optimized SpGEMM library called OpSparse. The optimizations in OpSparse include 1) optimizing the binning method by improving the utilization of the shared memory, 2) optimizing the hashing method by reducing accesses to the hash table, 3) improving the trade-off between hash collision rate and hardware utilization in the hashing method by setting appropriate binning ranges, 4) reducing the overheads of global memory utilization by minimizing the global memory usage of the metadata, and 5) improving execution parallelism by overlapping global memory allocation with kernel execution. Performance evaluations with 26 commonly used matrices on an Nvidia Tesla V100 GPU show that OpSparse achieves up to 27.8×, 1.81×, and 2.04× speedup over three state-of-the-art libraries: cuSPARSE, nsparse, and spECK, respectively.
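To make the hash-accumulation idea behind these libraries concrete, the following is a minimal CPU-side sketch (not the OpSparse implementation): it computes C = A·B row by row on CSR inputs, using a hash table to accumulate partial products per output row. In GPU libraries such as nsparse and spECK, this hash table lives in shared memory and rows are binned to thread groups by expected size; here a plain Python dict plays the role of the hash table, purely for illustration.

```python
def spgemm_csr(a_ptr, a_idx, a_val, b_ptr, b_idx, b_val, n_rows):
    """Compute C = A @ B with CSR inputs; returns (c_ptr, c_idx, c_val).

    Illustrative row-wise hash accumulation: for each nonzero A[i, j],
    the partial products A[i, j] * B[j, :] are merged into a per-row
    hash table keyed by output column index.
    """
    c_ptr, c_idx, c_val = [0], [], []
    for i in range(n_rows):
        acc = {}  # hash table: column index -> accumulated value
        for k in range(a_ptr[i], a_ptr[i + 1]):
            j, a = a_idx[k], a_val[k]
            for t in range(b_ptr[j], b_ptr[j + 1]):
                col = b_idx[t]
                acc[col] = acc.get(col, 0.0) + a * b_val[t]
        for col in sorted(acc):  # emit row i in column order
            c_idx.append(col)
            c_val.append(acc[col])
        c_ptr.append(len(c_idx))
    return c_ptr, c_idx, c_val
```

The per-row hash table is what the paper's optimizations 2) and 3) target on the GPU: fewer accesses per insertion, and binning ranges chosen so that the shared-memory table is large enough to keep the collision rate low without wasting occupancy.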
