Low Latency Transformer Inference on FPGAs for Physics Applications with hls4ml
Journal of Instrumentation (JINST), 2024
Shih-Chieh Hsu
Main: 9 pages, 14 figures, 4 tables; bibliography: 2 pages
Abstract
This study presents an efficient implementation of transformer architectures on Field-Programmable Gate Arrays (FPGAs) using hls4ml. We describe our strategy for implementing the multi-head attention, softmax, and normalization layers and evaluate three distinct models. Deployed on a Xilinx VU13P FPGA, they achieve latencies below 2 µs, demonstrating the potential for real-time applications. hls4ml's compatibility with any transformer model built in TensorFlow further enhances the scalability and applicability of this work.
Index Terms: FPGAs, machine learning, transformers, high energy physics, LIGO
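To illustrate the workflow the abstract describes, below is a minimal sketch of converting a small TensorFlow/Keras transformer block into an HLS project with hls4ml. It assumes an hls4ml version with support for MultiHeadAttention and LayerNormalization layers (as this paper describes); the output directory and VU13P part string are illustrative placeholders, and the model dimensions are arbitrary.

```python
import hls4ml
import tensorflow as tf
from tensorflow.keras import layers

seq_len, d_model, n_heads = 16, 32, 4

# A single encoder-style block: multi-head self-attention with a residual
# connection and layer normalization, followed by a softmax classifier head.
inp = layers.Input(shape=(seq_len, d_model))
attn = layers.MultiHeadAttention(num_heads=n_heads, key_dim=d_model // n_heads)(inp, inp)
x = layers.Add()([inp, attn])
x = layers.LayerNormalization()(x)
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(3, activation='softmax')(x)
model = tf.keras.Model(inp, out)

# Generate a per-layer hls4ml configuration (fixed-point precision,
# reuse factors, etc.) that can be tuned before conversion.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Convert the Keras model to an HLS project targeting a VU13P-class device.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls_transformer_prj',  # hypothetical output directory
    part='xcvu13p-flga2577-2-e',       # example VU13P part; adjust to your board
)
hls_model.compile()  # builds a C/C++ emulation library for bit-accurate checks
```

After `compile()`, `hls_model.predict()` can be compared against the Keras model's outputs to verify numerical agreement before running full synthesis.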
