In-Context Learning (ICL) has been shown to be a powerful technique for augmenting the capabilities of LLMs across a diverse range of tasks. This work proposes \ourtool, a novel approach that uses guidance from graph neural networks (GNNs) to generate context for producing efficient parallel codes. We evaluate \ourtool\xspace on applications from two well-known benchmark suites of parallel codes: the NAS Parallel Benchmarks and the Rodinia Benchmark. Our results show that \ourtool\xspace improves state-of-the-art LLMs (e.g., GPT-4) by 19.9\% on NAS and 6.48\% on Rodinia in terms of CodeBERTScore for the task of parallel code generation. Moreover, \ourtool\xspace improves the most powerful LLM to date, GPT-4, achieving 17\% (on NAS) and 16\% (on Rodinia) better speedup. In addition, we propose \ourscore\xspace for evaluating the quality of parallel code and demonstrate its effectiveness. \ourtool\xspace is available at this https URL.
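The core idea of GNN-guided context generation can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual pipeline or API: the function names (`gnn_predict_pattern`, `build_prompt`), the pattern labels, and the heuristic standing in for the GNN are all assumptions made purely for exposition.

```python
# Hypothetical sketch: a GNN classifies a loop's parallelization pattern,
# and that prediction is injected into the LLM prompt as guiding context.

def gnn_predict_pattern(loop_src: str) -> str:
    """Stand-in for the GNN classifier (assumption, not the paper's model).

    A real model would consume a graph representation of the loop (e.g.,
    with control/data-flow edges); here a trivial keyword heuristic is
    used purely for illustration.
    """
    return "reduction" if "sum" in loop_src else "do-all"


def build_prompt(loop_src: str) -> str:
    """Assemble an LLM prompt that embeds the GNN's prediction as context."""
    pattern = gnn_predict_pattern(loop_src)
    return (
        f"The following loop is parallelizable as a '{pattern}' pattern.\n"
        f"Generate an equivalent OpenMP-parallel version:\n{loop_src}"
    )


loop = "for (i = 0; i < n; i++) sum += a[i];"
print(build_prompt(loop))
```

The point of the design is that the LLM no longer has to infer the parallelization pattern on its own; the GNN's structural prediction narrows the generation task.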
@article{mahmud2025_2310.04047,
  title={AutoParLLM: GNN-guided Context Generation for Zero-Shot Code Parallelization using LLMs},
  author={Quazi Ishtiaque Mahmud and Ali TehraniJamsaz and Hung Phan and Le Chen and Mihai Capotă and Theodore Willke and Nesreen K. Ahmed and Ali Jannesari},
  journal={arXiv preprint arXiv:2310.04047},
  year={2025}
}