LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning

FPGAs are increasingly adopted in datacenter environments for their reconfigurability and energy efficiency. High-Level Synthesis (HLS) tools have eased FPGA programming by raising the abstraction level from RTL to untimed C/C++, yet attaining high performance still demands expert knowledge and iterative manual insertion of optimization pragmas to modify the microarchitecture. To address this challenge, we propose LIFT, a large language model (LLM)-based coding assistant for HLS that automatically generates performance-critical pragmas given a C/C++ design. We fine-tune the LLM by tightly integrating and supervising the training process with a graph neural network (GNN), combining the sequential modeling capabilities of LLMs with the structural and semantic understanding of GNNs necessary for reasoning over code and its control/data dependencies. On average, LIFT produces designs that improve performance by 3.52x and 2.16x compared to the prior state-of-the-art AutoDSE and HARP, respectively, and by 66x compared to GPT-4o.
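
To make the task concrete, the following is an illustrative sketch (not taken from the paper) of the kind of transformation LIFT automates: the same untimed C kernel before and after pragma insertion. The matrix-vector kernel and the specific Vitis HLS pragmas are assumptions for illustration; the paper's exact pragma dialect (AutoDSE and HARP, for instance, build on Merlin-style pragmas) may differ.

#define N 1024

// Baseline: plain untimed C/C++, no microarchitecture hints.
// Synthesizes to a sequential design with one multiply-accumulate per cycle at best.
void gemv_baseline(const float A[N][N], const float x[N], float y[N]) {
  for (int i = 0; i < N; ++i) {
    float acc = 0.0f;
    for (int j = 0; j < N; ++j)
      acc += A[i][j] * x[j];
    y[i] = acc;
  }
}

// With pragmas an expert (or LIFT) would insert to expose parallelism.
void gemv_optimized(const float A[N][N], const float x[N], float y[N]) {
// Partition the arrays so 16 elements can be read per cycle.
#pragma HLS array_partition variable=x cyclic factor=16 dim=1
#pragma HLS array_partition variable=A cyclic factor=16 dim=2
  for (int i = 0; i < N; ++i) {
    float acc = 0.0f;
    for (int j = 0; j < N; ++j) {
// Unroll by 16 for parallel multiply-accumulates, then pipeline the
// resulting loop. II=1 is a target; the floating-point accumulation
// carries a dependence that may limit the achieved II.
#pragma HLS pipeline II=1
#pragma HLS unroll factor=16
      acc += A[i][j] * x[j];
    }
    y[i] = acc;
  }
}

The difficulty LIFT addresses is that choosing which pragmas to insert, where, and with what factors depends on loop structure and data dependencies, which is why the GNN's view of the program graph supervises the LLM's token-level generation.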
@article{prakriya2025_2504.21187,
  title   = {LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning},
  author  = {Neha Prakriya and Zijian Ding and Yizhou Sun and Jason Cong},
  journal = {arXiv preprint arXiv:2504.21187},
  year    = {2025}
}