AutoParLLM: GNN-guided Context Generation for Zero-Shot Code Parallelization using LLMs

20 February 2025
Quazi Ishtiaque Mahmud
Ali TehraniJamsaz
Hung Phan
Le Chen
Mihai Capotă
Theodore L. Willke
Nesreen Ahmed
Ali Jannesari
Abstract

In-Context Learning (ICL) has been shown to be a powerful technique for augmenting the capabilities of LLMs across a diverse range of tasks. This work proposes AutoParLLM, a novel way to generate context using guidance from graph neural networks (GNNs) in order to generate efficient parallel code. We evaluate AutoParLLM on 12 applications from two well-known benchmark suites of parallel codes: the NAS Parallel Benchmarks and the Rodinia Benchmark. Our results show that AutoParLLM improves state-of-the-art LLMs (e.g., GPT-4) by 19.9% on the NAS benchmark and 6.48% on the Rodinia benchmark in terms of CodeBERTScore for the task of parallel code generation. Moreover, AutoParLLM improves the most powerful LLM to date, GPT-4, achieving ≈17% (NAS benchmark) and ≈16% (Rodinia benchmark) better speedup. In addition, we propose OMPScore for evaluating the quality of parallel code and show its effectiveness. AutoParLLM is available at this https URL.
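As a rough illustration of the pipeline the abstract describes, the sketch below shows how a GNN's parallelism prediction might be turned into ICL context for an LLM. Everything here, including the build_parallelization_prompt function, the prediction fields, and the prompt wording, is a hypothetical reconstruction rather than the paper's actual implementation.

# Hypothetical sketch of GNN-guided context generation for zero-shot
# code parallelization. Field names and prompt text are illustrative
# assumptions, not taken from the paper.

def build_parallelization_prompt(source_loop: str, gnn_prediction: dict) -> str:
    """Turn a GNN's parallelism prediction into ICL context for the LLM."""
    hints = []
    if gnn_prediction.get("parallelizable"):
        hints.append("This loop carries no loop-carried dependence.")
        directive = gnn_prediction.get("directive", "#pragma omp parallel for")
        hints.append(f"A suitable OpenMP directive is: {directive}")
        for clause in gnn_prediction.get("clauses", []):
            hints.append(f"Consider adding the clause: {clause}")
    else:
        hints.append("This loop has a loop-carried dependence; do not parallelize it.")

    context = "\n".join(hints)
    return (
        "You are an expert in OpenMP parallelization.\n"
        f"{context}\n"
        "Rewrite the following loop accordingly:\n"
        f"{source_loop}\n"
    )

# Example: the GNN has flagged a loop as parallelizable with a reduction.
prediction = {
    "parallelizable": True,
    "directive": "#pragma omp parallel for",
    "clauses": ["reduction(+:sum)"],
}
loop = "for (int i = 0; i < n; i++) sum += a[i] * b[i];"
print(build_parallelization_prompt(loop, prediction))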

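For the CodeBERTScore evaluation mentioned above, a minimal sketch using the open-source code-bert-score package might look like the following; it assumes that package's score() entry point (which returns precision, recall, F1, and F3 tensors) and uses made-up candidate and reference snippets rather than the paper's data.

import code_bert_score

# Hypothetical candidate/reference pair of OpenMP code; not the paper's data.
candidates = ["#pragma omp parallel for reduction(+:sum)\nfor (int i = 0; i < n; i++) sum += a[i] * b[i];"]
references = ["#pragma omp parallel for reduction(+:sum)\nfor (int i = 0; i < n; i++) sum += a[i] * b[i];"]

# score() returns per-pair precision, recall, F1, and F3 tensors.
precision, recall, f1, f3 = code_bert_score.score(cands=candidates, refs=references, lang="c")
print(f"CodeBERTScore F1: {f1.item():.3f}")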
View on arXiv
@article{mahmud2025_2310.04047,
  title={AutoParLLM: GNN-guided Context Generation for Zero-Shot Code Parallelization using LLMs},
  author={Quazi Ishtiaque Mahmud and Ali TehraniJamsaz and Hung Phan and Le Chen and Mihai Capotă and Theodore Willke and Nesreen K. Ahmed and Ali Jannesari},
  journal={arXiv preprint arXiv:2310.04047},
  year={2025}
}