Static Batching of Irregular Workloads on GPUs: Framework and Application to Efficient MoE Model Inference

Abstract

Arranging and executing irregular workloads on massively parallel devices has long been a challenge. We propose a general framework for statically batching irregular workloads into a single GPU kernel with a runtime task mapping mechanism. We further apply this framework to Mixture-of-Experts (MoE) model inference and implement an optimized and efficient CUDA kernel. Our MoE kernel achieves up to 91% of the peak Tensor Core throughput on the NVIDIA H800 GPU and 95% on the NVIDIA H20 GPU.
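The core idea of static batching with runtime task mapping can be illustrated with a minimal sketch: variable-sized tasks are tiled into fixed-size units up front, and each worker (analogous to a GPU thread block) looks up at runtime which task its flat index belongs to. All names and the tiling scheme below are illustrative assumptions, not the paper's actual kernel or API.

```python
import bisect

def build_task_map(task_sizes, tile=4):
    """Statically partition irregular tasks into fixed-size tiles.

    Returns the starting tile index of each task and the total tile
    count (i.e. how many workers the single batched kernel would launch).
    """
    starts, acc = [], 0
    for size in task_sizes:
        starts.append(acc)
        acc += -(-size // tile)  # ceil(size / tile) tiles for this task
    return starts, acc

def locate(starts, tile_idx):
    """Runtime task mapping: resolve a flat tile index to
    (task id, tile offset within that task)."""
    task = bisect.bisect_right(starts, tile_idx) - 1
    return task, tile_idx - starts[task]

# Three tasks of irregular size, batched into one flat tile space.
sizes = [5, 1, 9]
starts, total = build_task_map(sizes, tile=4)
for b in range(total):        # one iteration ~ one GPU thread block
    task, local_tile = locate(starts, b)
    # ...process tile `local_tile` of task `task` here...
```

On a GPU, the `starts` table would live in device memory and each block would perform this lookup once before processing its tile, so a single kernel launch covers all irregular tasks.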

Main: 8 pages · Bibliography: 3 pages · 1 table