On Explaining (Large) Language Models For Code Using Global Code-Based Explanations

21 March 2025
David Nader-Palacio
Dipin Khati
Daniel Rodríguez-Cárdenas
Alejandro Velasco
Denys Poshyvanyk
Abstract

In recent years, Language Models for Code (LLM4Code) have significantly changed the landscape of software engineering (SE) on downstream tasks, such as code generation, by making software development more efficient. Therefore, a growing interest has emerged in further evaluating these Language Models to homogenize the quality assessment of generated code. As the current evaluation process can over-rely on accuracy-based metrics, practitioners often seek methods to interpret LLM4Code outputs beyond canonical benchmarks. While the majority of research reports on code generation effectiveness in terms of expected ground truth, scant attention has been paid to LLMs' explanations. In essence, the decision-making process behind generating code is hard to interpret. To bridge this evaluation gap, we introduce code rationales (CodeQ), a technique with rigorous mathematical underpinning, to identify subsets of input tokens that can explain individual code predictions. We conducted a thorough Exploratory Analysis to demonstrate the method's applicability and a User Study to understand the usability of code-based explanations. Our evaluation demonstrates that CodeQ is a powerful interpretability method for explaining how (less) meaningful input concepts (i.e., the natural language particle 'at') highly impact output generation. Moreover, participants of this study highlighted CodeQ's ability to show a causal relationship between the model's input and output, with readable and informative explanations on code completion and test generation tasks. Additionally, CodeQ also helps to uncover the model's rationale, facilitating comparison with a human rationale to promote a fair level of trust and distrust in the model.
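
The abstract's core idea, identifying a small subset of input tokens that suffices to explain a single code prediction, can be illustrated with a short sketch. The snippet below is not the authors' CodeQ implementation; it shows a generic greedy rationale-extraction loop under stated assumptions, and the names score_fn, greedy_rationale, and the toy scorer are hypothetical stand-ins for a real model call.

# Minimal sketch of greedy rationale extraction for one prediction.
# Hypothetical: score_fn(context_tokens, target) stands in for any model
# call returning P(target | context); it is NOT the paper's CodeQ API.
from typing import Callable, List, Set


def greedy_rationale(
    tokens: List[str],
    target: str,
    score_fn: Callable[[List[str], str], float],
    threshold: float = 0.9,
) -> List[str]:
    """Grow the smallest token subset whose inclusion makes the model
    assign at least `threshold` probability to the predicted `target`."""
    chosen: Set[int] = set()
    while len(chosen) < len(tokens):
        best_idx, best_score = -1, -1.0
        # Try adding each remaining token and keep the one that helps most.
        for i in range(len(tokens)):
            if i in chosen:
                continue
            subset = [tokens[j] for j in sorted(chosen | {i})]
            score = score_fn(subset, target)
            if score > best_score:
                best_idx, best_score = i, score
        chosen.add(best_idx)
        if best_score >= threshold:
            break
    return [tokens[j] for j in sorted(chosen)]


if __name__ == "__main__":
    # Toy scorer: probability grows with the fraction of "informative" tokens kept.
    informative = {"def", "add", "return"}

    def toy_score(subset: List[str], target: str) -> float:
        kept = sum(1 for t in subset if t in informative)
        return kept / len(informative)

    prompt = ["#", "at", "def", "add", "(", "a", ",", "b", ")", ":", "return"]
    print(greedy_rationale(prompt, "a + b", toy_score))  # ['def', 'add', 'return']

Under this reading, the returned subset plays the role of a rationale: the smallest set of prompt tokens whose presence alone is enough for the model to commit to the observed prediction, which is the kind of input-output explanation the abstract attributes to CodeQ.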

@article{palacio2025_2503.16771,
  title={On Explaining (Large) Language Models For Code Using Global Code-Based Explanations},
  author={David N. Palacio and Dipin Khati and Daniel Rodriguez-Cardenas and Alejandro Velasco and Denys Poshyvanyk},
  journal={arXiv preprint arXiv:2503.16771},
  year={2025}
}