
Non-Determinism of "Deterministic" LLM Settings

Guru Rajan Rajagopal
Adam Sloan
Tomasz Tudrej
Ferhan Ture
Zhe Wu
Lixinyu Xu
Breck Baldwin
Main: 8 pages, 12 figures, 4 tables; Bibliography: 4 pages; Appendix: 3 pages
Abstract

LLM (large language model) practitioners commonly notice that outputs can vary for the same inputs under settings expected to be deterministic. Yet the questions of how pervasive this is, and with what impact on results, have not to our knowledge been systematically investigated. We investigate non-determinism in five LLMs configured to be deterministic when applied to eight common tasks across 10 runs, in both zero-shot and few-shot settings. We observe accuracy variations of up to 15% across naturally occurring runs, with a gap between best-possible and worst-possible performance of up to 70%. In fact, none of the LLMs consistently delivers repeatable accuracy across all tasks, much less identical output strings. Sharing preliminary results with insiders has revealed that non-determinism is perhaps essential to the efficient use of compute resources via co-mingled data in input buffers, so this issue is not going away anytime soon. To better quantify our observations, we introduce metrics focused on determinism: TARr@N, the total agreement rate at N runs over raw output, and TARa@N, the total agreement rate over parsed-out answers. Our code and data are publicly available at this https URL.
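The sketch below illustrates one plausible reading of the agreement metrics named in the abstract; it is an assumption, not the paper's implementation. It assumes TARr@N is the fraction of prompts whose raw output string is identical across all N runs, and TARa@N is the same fraction computed over parsed-out answers. The `parse_answer` callable is a hypothetical helper.

```python
# Minimal sketch of total-agreement-rate metrics, under the assumptions
# stated above (not the authors' code).

from typing import Callable, Dict, List


def total_agreement_rate(runs: Dict[str, List[str]]) -> float:
    """Fraction of prompts for which all N runs produced the same string."""
    agreed = sum(1 for outputs in runs.values() if len(set(outputs)) == 1)
    return agreed / len(runs) if runs else 0.0


def tar_raw(runs: Dict[str, List[str]]) -> float:
    # TARr@N: agreement over raw output strings.
    return total_agreement_rate(runs)


def tar_answer(runs: Dict[str, List[str]], parse_answer: Callable[[str], str]) -> float:
    # TARa@N: agreement after parsing the final answer out of each output.
    parsed = {p: [parse_answer(o) for o in outs] for p, outs in runs.items()}
    return total_agreement_rate(parsed)


if __name__ == "__main__":
    # Toy example: 2 prompts, N = 3 runs each.
    runs = {
        "q1": ["Answer: 4", "Answer: 4", "The answer is 4"],
        "q2": ["Answer: 7", "Answer: 7", "Answer: 7"],
    }
    print(tar_raw(runs))                              # 0.5: q1's raw strings differ
    print(tar_answer(runs, lambda s: s.split()[-1]))  # 1.0: parsed answers agree
```

The toy example shows why the two metrics can diverge: raw outputs may differ in phrasing while the extracted answers still agree.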
