
FPGA Co-Design for Efficient N:M Sparse and Quantized Model Inference

Fen-Yu Hsieh
Yun-Chang Teng
Ding-Yong Hong
Jan-Jan Wu
Main: 9 pages, 5 figures, 9 tables; Bibliography: 2 pages
Abstract

Large language models (LLMs) have demonstrated remarkable performance across a wide range of language processing tasks. However, this success comes at the cost of substantial computation and memory requirements, which significantly impedes their deployment in resource-constrained environments. To address this challenge, this work introduces an automation framework that leverages weight pruning and low-bit quantization, and presents a hardware-software co-design method that generates accelerators on the Field-Programmable Gate Array (FPGA) platform. In particular, we implement a unified pipeline that applies N:M structured pruning and 4-bit integer quantization to reduce the memory footprint, followed by optimized dequantization and matrix multiplication to enhance LLM inference on several hardware platforms, including CPUs, NVIDIA GPUs with Dense and 2:4 Sparse Tensor Cores, and a custom systolic-array-based FPGA accelerator. Utilizing 2:4 sparsity combined with quantization on 4096×4096 matrices, our approach achieves a reduction of up to 4× in weight storage and a 1.71× speedup in matrix multiplication, yielding a 1.29× end-to-end latency reduction compared to dense GPU baselines. Scaling analysis on the LLaMA-7B model further shows that structured sparsity enhances the throughput per token by 1.36×. These results demonstrate the synergy of fine-grained N:M sparsity and quantization for enabling efficient and deployable LLM inference, while the proposed FPGA accelerator offers a flexible architectural path for supporting a broader class of sparsity patterns beyond the fixed 2:4 hardware constraints.
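To make the pipeline concrete, the sketch below illustrates the two compression steps the abstract names: 2:4 structured pruning (keep the 2 largest-magnitude weights in every group of 4) followed by symmetric 4-bit integer quantization, with dequantization applied before the matrix multiplication. This is a minimal NumPy illustration under assumed conventions (per-row scales, magnitude-based selection), not the authors' implementation or their FPGA/GPU kernels.

import numpy as np

def prune_2_4(w: np.ndarray) -> np.ndarray:
    """Keep the 2 largest-magnitude weights in every group of 4 along each row."""
    rows, cols = w.shape
    assert cols % 4 == 0, "columns must be a multiple of the group size (4)"
    groups = w.copy().reshape(rows, cols // 4, 4)
    # Indices of the 2 smallest-magnitude entries per group; zero them out.
    drop = np.argsort(np.abs(groups), axis=-1)[..., :2]
    np.put_along_axis(groups, drop, 0.0, axis=-1)
    return groups.reshape(rows, cols)

def quantize_int4(w: np.ndarray):
    """Symmetric per-row 4-bit quantization: codes in [-8, 7], w ≈ q * scale."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequant_matmul(q: np.ndarray, scale: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Dequantize the 4-bit weights and multiply with activations x."""
    return (q.astype(np.float32) * scale) @ x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((8, 16)).astype(np.float32)   # toy weight matrix
    x = rng.standard_normal((16, 4)).astype(np.float32)   # toy activations

    w_sparse = prune_2_4(w)              # 50% of weights zeroed, 2:4 pattern
    q, scale = quantize_int4(w_sparse)   # 4-bit codes plus per-row scales
    y = dequant_matmul(q, scale, x)

    print("kept fraction:", np.count_nonzero(w_sparse) / w_sparse.size)  # 0.5
    print("output shape:", y.shape)

In a real deployment, only the nonzero 4-bit codes, their 2:4 position metadata, and the scales would be stored, which is the source of the weight-storage reduction reported in the abstract.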
