Integrating Symbolic Execution into the Fine-Tuning of Code-Generating LLMs

21 April 2025
Marina Sakharova
Abhinav Anand
Mira Mezini
Abstract

Code-generating Large Language Models (LLMs) have become essential tools in modern software development, enhancing productivity and accelerating development cycles. This paper investigates fine-tuning code-generating LLMs with Reinforcement Learning and Direct Preference Optimization (DPO) to further improve their performance. To this end, we enhance the training data for the reward model using symbolic execution techniques, ensuring more comprehensive and objective data. With symbolic execution, we create a custom dataset that better captures the nuances of code evaluation. Our reward models, fine-tuned on this dataset, significantly outperform the baseline, CodeRL, at estimating the quality of generated code. Our code-generating LLMs, trained with feedback from these reward models, achieve results comparable to the CodeRL benchmark.
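
To make the pipeline in the abstract concrete, the sketch below illustrates one way symbolic reasoning can produce training data of the kind described: an SMT solver searches for an input on which an LLM-generated candidate diverges from a reference solution, and the outcome is packaged as a (chosen, rejected) preference pair for reward-model or DPO training. This is a minimal, hypothetical illustration, not the authors' implementation; the prompt, the two toy programs, and the preference-pair fields are all assumptions made for the example. It uses the Z3 solver (pip install z3-solver).

# Hypothetical sketch: use symbolic reasoning to label a code preference pair.
# Not the paper's code; programs and field names are illustrative assumptions.
from z3 import Int, If, Solver, sat

x = Int("x")

# Symbolic encodings of two programs for "absolute value".
reference = If(x >= 0, x, -x)  # correct reference solution
candidate = x                  # buggy LLM-generated candidate

# Ask the solver for an input on which the two programs disagree.
s = Solver()
s.add(reference != candidate)

if s.check() == sat:
    witness = s.model()[x]
    # The candidate fails on this input, so the reference is preferred.
    preference_pair = {
        "prompt": "Write a function that returns |x|.",
        "chosen": "def f(x): return x if x >= 0 else -x",
        "rejected": "def f(x): return x",
        "witness_input": str(witness),  # e.g. x = -1
    }
    print(preference_pair)
else:
    # No distinguishing input exists over the modelled domain:
    # treat the candidate as behaviourally equivalent to the reference.
    print("candidate is equivalent on the modelled domain")

Compared with running a fixed unit-test suite, solver-derived counterexamples of this kind can expose divergences on inputs the test suite never covers, which is the sense in which the abstract calls the resulting data "more comprehensive and objective".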

@article{sakharova2025_2504.15210,
  title={Integrating Symbolic Execution into the Fine-Tuning of Code-Generating LLMs},
  author={Marina Sakharova and Abhinav Anand and Mira Mezini},
  journal={arXiv preprint arXiv:2504.15210},
  year={2025}
}