Can LLMs Enable Verification in Mainstream Programming?

Abstract

Although formal methods are capable of producing reliable software, they have seen minimal adoption in everyday programming. Automatic code generation using large language models is becoming increasingly widespread, but it rarely considers producing strong correctness guarantees. In this study, we explore the ability of LLMs to produce verified code in three verification languages (Dafny, Nagini, and Verus). To do so, we use manually curated datasets derived from the state-of-the-art Python benchmark, HumanEval. We also assess what types of information are sufficient to achieve good-quality results.
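
As a point of reference, verified code in these languages pairs an implementation with machine-checked specifications. The following Dafny method is a minimal illustrative sketch, not an example from the paper's dataset: the ensures clauses state postconditions that the verifier proves hold for every input.

  // Absolute value; the postconditions below are checked statically by the Dafny verifier.
  method Abs(x: int) returns (r: int)
    ensures r >= 0
    ensures r == x || r == -x
  {
    if x < 0 {
      r := -x;
    } else {
      r := x;
    }
  }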

@article{shefer2025_2503.14183,
  title={Can LLMs Enable Verification in Mainstream Programming?},
  author={Aleksandr Shefer and Igor Engel and Stanislav Alekseev and Daniil Berezun and Ekaterina Verbitskaia and Anton Podkopaev},
  journal={arXiv preprint arXiv:2503.14183},
  year={2025}
}