Investigating Execution-Aware Language Models for Code Optimization

11 March 2025
Federico Di Menna
Luca Traini
Gabriele Bavota
Vittorio Cortellessa
Abstract

Code optimization is the process of enhancing code efficiency while preserving its intended functionality. This process often requires a deep understanding of the code execution behavior at run-time to identify and address inefficiencies effectively. Recent studies have shown that language models can play a significant role in automating code optimization. However, these models may have insufficient knowledge of how code executes at run-time. To address this limitation, researchers have developed strategies that integrate code execution information into language models. These strategies have shown promise, enhancing the effectiveness of language models in various software engineering tasks. However, despite the close relationship between code execution behavior and efficiency, the specific impact of these strategies on code optimization remains largely unexplored. This study investigates how incorporating code execution information into language models affects their ability to optimize code. Specifically, we apply three different training strategies to incorporate four code execution aspects -- line executions, line coverage, branch coverage, and variable states -- into CodeT5+, a well-known language model for code. Our results indicate that execution-aware models provide limited benefits compared to the standard CodeT5+ model in optimizing code.
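The paper's exact training strategies and input encodings are not reproduced on this page, but the general idea of an execution-aware input can be sketched. The snippet below is an illustrative assumption rather than the authors' pipeline: it traces a Python function with sys.settrace to count per-line executions, then interleaves those counts into the source as comments, producing the kind of execution-annotated code text that could be fed to a code model such as CodeT5+.

```python
import sys
import inspect
from collections import Counter


def trace_line_executions(func, *args, **kwargs):
    """Run func and count how many times each of its source lines executes."""
    counts = Counter()
    code_obj = func.__code__

    def tracer(frame, event, arg):
        # Only count 'line' events that belong to the traced function.
        if event == "line" and frame.f_code is code_obj:
            counts[frame.f_lineno] += 1
        return tracer

    sys.settrace(tracer)
    try:
        func(*args, **kwargs)
    finally:
        sys.settrace(None)
    return counts


def annotate_with_executions(func, *args, **kwargs):
    """Interleave per-line execution counts into the source as comments.

    This is one possible (hypothetical) execution-aware representation;
    the paper evaluates several aspects, e.g. line executions, line and
    branch coverage, and variable states.
    """
    counts = trace_line_executions(func, *args, **kwargs)
    source_lines, first_lineno = inspect.getsourcelines(func)
    annotated = []
    for offset, line in enumerate(source_lines):
        lineno = first_lineno + offset
        annotated.append(f"{line.rstrip()}  # executed {counts.get(lineno, 0)}x")
    return "\n".join(annotated)


def sum_of_squares(n):
    total = 0
    for i in range(n):
        total += i * i
    return total


if __name__ == "__main__":
    # The annotated text could serve as model input for an execution-aware
    # fine-tuning setup; the actual encoding used in the study may differ.
    print(annotate_with_executions(sum_of_squares, 5))
```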

@article{menna2025_2503.08228,
  title={Investigating Execution-Aware Language Models for Code Optimization},
  author={Federico Di Menna and Luca Traini and Gabriele Bavota and Vittorio Cortellessa},
  journal={arXiv preprint arXiv:2503.08228},
  year={2025}
}