
Assessing Code Understanding in LLMs

Formal Techniques for (Networked and) Distributed Systems (FTNDS), 2025
Main: 15 pages
Appendix: 5 pages
Bibliography: 2 pages
Figures: 3
Tables: 7
Abstract

We present an empirical evaluation of Large Language Models (LLMs) on code understanding tasks involving non-trivial, semantics-preserving program transformations such as copy propagation and constant folding. Our findings show that LLMs fail to judge semantic equivalence in approximately 41% of cases when no context is provided and in 29% of cases when given a simple, generic context. To improve accuracy, we advocate integrating LLMs with code-optimization tools to enhance training and facilitate more robust program understanding.
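
As a concrete illustration of the transformations named above (a minimal sketch; the function names and constants are hypothetical and not drawn from the paper's benchmark), the C snippet below shows one program before and after copy propagation and constant folding. The equivalence-judgment task asks whether the two versions compute the same function on every input.

    #include <stdio.h>

    /* Original program. */
    int area_original(int w) {
        int h = w;          /* copy: h is an alias of w */
        int scale = 2 * 3;  /* constant expression      */
        return h * scale;
    }

    /* After copy propagation (uses of h replaced by w) and
       constant folding (2 * 3 folded to 6): same semantics. */
    int area_transformed(int w) {
        return w * 6;
    }

    int main(void) {
        /* Both versions agree on every input. */
        printf("%d %d\n", area_original(7), area_transformed(7)); /* prints "42 42" */
        return 0;
    }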
