
Analyzing GPU Tensor Core Potential for Fast Reductions

Abstract

The Nvidia GPU architecture has introduced new computing elements such as the \textit{tensor cores}, which are special processing units dedicated to performing fast matrix-multiply-accumulate (MMA) operations to accelerate \textit{Deep Learning} applications. In this work we present the idea of using tensor cores for a different purpose: the parallel arithmetic reduction problem. We propose a new GPU tensor-core based algorithm and analyze its potential performance benefits in comparison to a traditional GPU-based one. The proposed method encodes the reduction of $n$ numbers as a set of $m \times m$ MMA tensor-core operations (for Nvidia's Volta architecture, $m = 16$) and takes advantage of the fact that each MMA operation takes just one GPU cycle. When the cost is analyzed under a simplified GPU computing model, the result is that the new algorithm reduces a problem of $n$ numbers in $T(n) = 5\log_{m^2}(n)$ steps, with a speedup of $S = \frac{4}{5}\log_2(m^2)$, i.e., the ratio of a conventional $4\log_2(n)$-step parallel reduction to $T(n)$.
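To make the MMA encoding concrete, below is a minimal CUDA sketch (not the paper's implementation; the kernel name warpReduce256 and the final column cleanup are our own illustrative choices) in which one warp sums $16 \times 16 = 256$ values using the WMMA API. The key observation is that multiplying the value tile $V$ by an all-ones tile $J$ leaves the row sums of $V$ replicated across every column of the accumulator, so summing a single column yields the total.

// Compile with: nvcc -arch=sm_70 reduce256.cu   (tensor cores need sm_70+)
#include <cstdio>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// Single-warp kernel: reduce 256 half values with one 16x16x16 MMA.
// With V the 16x16 tile of inputs and J the all-ones tile,
// acc[i][j] = sum_k V[i][k] * 1 = (sum of row i of V) for every j,
// so summing column 0 of acc gives the total of all 256 inputs.
__global__ void warpReduce256(const half *v, float *out) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> vals;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> ones;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(ones, __float2half(1.0f));
    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(vals, v, 16);   // the 256 inputs as one tile
    wmma::mma_sync(acc, vals, ones, acc);  // one tensor-core MMA

    __shared__ float tile[256];
    wmma::store_matrix_sync(tile, acc, 16, wmma::mem_row_major);
    __syncwarp();  // make the warp's shared-memory writes visible

    if (threadIdx.x == 0) {
        float total = 0.0f;
        for (int i = 0; i < 16; ++i) total += tile[i * 16];  // column 0
        *out = total;
    }
}

int main() {
    half h[256];
    for (int i = 0; i < 256; ++i) h[i] = __float2half(1.0f);  // expect 256
    half *dV; float *dOut, result;
    cudaMalloc(&dV, sizeof(h));
    cudaMalloc(&dOut, sizeof(float));
    cudaMemcpy(dV, h, sizeof(h), cudaMemcpyHostToDevice);
    warpReduce256<<<1, 32>>>(dV, dOut);  // WMMA is a warp-wide operation
    cudaMemcpy(&result, dOut, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %f\n", result);
    cudaFree(dV); cudaFree(dOut);
    return 0;
}

In the paper's scheme the leftover 16 partial sums would themselves feed further MMA operations rather than a scalar loop, which is how a full tile of $m^2$ values is consumed per reduction level and the $\log_{m^2}(n)$ step count arises.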
