LASSI: An LLM-based Automated Self-Correcting Pipeline for Translating Parallel Scientific Codes

30 June 2024
Matthew T. Dearing
Yiheng Tao
Xingfu Wu
Zhiling Lan
Valerie Taylor
Abstract

This paper presents a novel approach to sourcing significant training data for LLMs focused on science and engineering. In particular, a crucial challenge is sourcing parallel scientific codes at the scale of millions to billions of code examples. To tackle this problem, we propose LASSI, an automated pipeline framework designed to translate between parallel programming languages by bootstrapping existing closed- or open-source LLMs. LASSI incorporates autonomous enhancement through self-correcting loops, in which errors encountered during compilation and execution of generated code are fed back to the LLM through guided prompting for debugging and refactoring. We validate LASSI through bi-directional translation of existing GPU benchmarks between OpenMP target offload and CUDA. Evaluating LASSI with different application codes across four LLMs demonstrates its effectiveness for generating executable parallel codes: 80% of OpenMP-to-CUDA translations and 85% of CUDA-to-OpenMP translations produce the expected output. We also observe that approximately 78% of OpenMP-to-CUDA translations and 62% of CUDA-to-OpenMP translations execute within 10% of, or faster than, the runtime of the original benchmark code in the same language.
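
To make the self-correcting loop concrete, the sketch below shows one way such a pipeline might be wired together. This is a minimal illustration, not LASSI's actual implementation: the LLM client (query_llm), the code-extraction helper, the prompt wording, the retry budget, and the compiler commands (nvcc for CUDA, clang++ with OpenMP target offload flags) are all assumptions made for the example.

import os
import subprocess
import tempfile

MAX_ATTEMPTS = 5  # self-correction rounds before giving up (assumed budget)

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in any closed- or open-source chat client."""
    raise NotImplementedError

def extract_code(response: str) -> str:
    """Pull the program out of the model's reply, stripping a markdown fence if present."""
    if "```" in response:
        body = response.split("```", 2)[1]
        return body.split("\n", 1)[1] if "\n" in body else body
    return response

def compile_and_run(code: str, lang: str) -> tuple[bool, str]:
    """Compile and execute a candidate; return (success, feedback text).
    Compiler choices are illustrative: nvcc for CUDA, clang++ for OpenMP target offload."""
    suffix = ".cu" if lang == "CUDA" else ".cpp"
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "candidate" + suffix)
        exe = os.path.join(tmp, "candidate.out")
        with open(src, "w") as f:
            f.write(code)
        cmd = (["nvcc", src, "-o", exe] if lang == "CUDA"
               else ["clang++", "-fopenmp", "-fopenmp-targets=nvptx64", src, "-o", exe])
        build = subprocess.run(cmd, capture_output=True, text=True)
        if build.returncode != 0:
            return False, build.stderr  # compilation error becomes LLM feedback
        run = subprocess.run([exe], capture_output=True, text=True, timeout=120)
        if run.returncode != 0:
            return False, run.stderr    # runtime error becomes LLM feedback
        return True, run.stdout

def translate_with_self_correction(source: str, src_lang: str, dst_lang: str) -> str | None:
    """Translate source between parallel languages, looping on compile/run errors."""
    prompt = (f"Translate this {src_lang} program to {dst_lang}. "
              f"Return the complete translated program.\n\n{source}")
    candidate = extract_code(query_llm(prompt))
    for _ in range(MAX_ATTEMPTS):
        ok, feedback = compile_and_run(candidate, dst_lang)
        if ok:
            return candidate  # executable translation achieved
        # Guided prompting: feed the error back for debugging and refactoring.
        repair = (f"This {dst_lang} program failed with the error below. "
                  f"Fix it and return the full corrected program.\n\n"
                  f"Error:\n{feedback}\n\nProgram:\n{candidate}")
        candidate = extract_code(query_llm(repair))
    return None  # unresolved after MAX_ATTEMPTS

Per the abstract, the evaluation also compares a successful translation's output and runtime against the original benchmark in the same language; a full pipeline would add that check after a successful run.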

@article{dearing2025_2407.01638,
  title={LASSI: An LLM-based Automated Self-Correcting Pipeline for Translating Parallel Scientific Codes},
  author={Matthew T. Dearing and Yiheng Tao and Xingfu Wu and Zhiling Lan and Valerie Taylor},
  journal={arXiv preprint arXiv:2407.01638},
  year={2025}
}