Low Latency Transformer Inference on FPGAs for Physics Applications with hls4ml
Zhixing Jiang
Dennis Yin
Yihui Chen
Elham E Khoda
Scott Hauck
Shih-Chieh Hsu
E. Govorkova
Philip C. Harris
Vladimir Loncar
Eric A. Moreno

Abstract
This study presents an efficient implementation of transformer architectures on Field-Programmable Gate Arrays (FPGAs) using hls4ml. We demonstrate strategies for implementing the multi-head attention, softmax, and normalization layers and evaluate three distinct models. Deployed on a VU13P FPGA, the models achieved latencies below 2 µs, demonstrating the potential for real-time applications. hls4ml's compatibility with any TensorFlow-built transformer model further enhances the scalability and applicability of this work.

Index Terms: FPGAs, machine learning, transformers, high energy physics, LIGO
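As a minimal sketch of the TensorFlow-to-FPGA workflow the abstract describes, the snippet below follows hls4ml's standard Keras conversion flow, assuming the multi-head attention and normalization support presented in the paper is available in the installed hls4ml version. The model shapes, output directory, and the specific VU13P part string are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf
import hls4ml

# Toy transformer block exercising the layers the paper targets:
# multi-head attention (with its internal softmax) and layer normalization.
# All dimensions here are illustrative.
inputs = tf.keras.Input(shape=(16, 8))  # (sequence length, features)
attn = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=8)(inputs, inputs)
x = tf.keras.layers.LayerNormalization()(inputs + attn)  # residual + norm
outputs = tf.keras.layers.Dense(4, activation='softmax')(
    tf.keras.layers.GlobalAveragePooling1D()(x))
model = tf.keras.Model(inputs, outputs)

# Standard hls4ml conversion: generate a per-layer config (fixed-point
# precision and parallelism can be tuned here), then convert.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='transformer_hls',
    part='xcvu13p-flga2577-2-e',  # an assumed VU13P part variant
)
hls_model.compile()  # builds the C simulation library for bit-accurate validation
```

After `compile()`, `hls_model.predict(...)` can be compared against the Keras model's output to validate the fixed-point implementation before launching a full synthesis run.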