Within-Model vs Between-Prompt Variability in Large Language Models for Creative Tasks
Jennifer Haase
Jana Gonnermann-Müller
Paul H. P. Hanel
Nicolas Leins
Thomas Kosch
Jan Mendling
Sebastian Pokutta
Main: 2 pages · Bibliography: 1 page · Appendix: 15 pages · 14 figures · 6 tables
Abstract
How much of the variance in LLM output is explained by the prompt, by model choice, and by sampling stochasticity? We answer this by evaluating 12 LLMs on 10 creativity prompts with 100 samples each (N = 12,000). For output quality (originality), prompts explain 36.43% of the variance, comparable to model choice (40.94%). For output quantity (fluency), however, model choice (51.25%) and within-LLM variance (33.70%) dominate, with prompts explaining only 4.22%. Prompts are thus powerful levers for steering output quality, but given the substantial within-LLM variance (10–34%), single-sample evaluations risk conflating sampling noise with genuine prompt or model effects.
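The decomposition described in the abstract resembles a crossed two-way variance partition (model × prompt, with 100 repeated samples per cell). The sketch below shows one way such percentage shares could be computed from a long-format results table; the simulated data, column names, and use of sum-of-squares (eta-squared) shares are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format results: one row per generated sample.
# Columns "model", "prompt", "originality" are assumed names, not the
# paper's actual data schema. Here we simulate 12 models x 10 prompts
# x 100 samples = 12,000 rows to mirror the study's design.
rng = np.random.default_rng(0)
rows = []
for m in [f"model_{i}" for i in range(12)]:
    mu = rng.normal()                     # model effect
    for p in [f"prompt_{j}" for j in range(10)]:
        tau = rng.normal()                # prompt effect
        for _ in range(100):              # repeated samples per cell
            rows.append({"model": m, "prompt": p,
                         "originality": mu + tau + rng.normal()})
df = pd.DataFrame(rows)

# Partition the total sum of squares into model, prompt, interaction,
# and within-cell (sampling) components, then report each share.
grand = df["originality"].mean()
ss_total = ((df["originality"] - grand) ** 2).sum()

n_model = df.groupby("model").size()
ss_model = (n_model * (df.groupby("model")["originality"].mean()
                       - grand) ** 2).sum()

n_prompt = df.groupby("prompt").size()
ss_prompt = (n_prompt * (df.groupby("prompt")["originality"].mean()
                         - grand) ** 2).sum()

cell_mean = df.groupby(["model", "prompt"])["originality"].transform("mean")
ss_within = ((df["originality"] - cell_mean) ** 2).sum()
ss_interact = ss_total - ss_model - ss_prompt - ss_within

for name, ss in [("model", ss_model), ("prompt", ss_prompt),
                 ("interaction", ss_interact), ("within-LLM", ss_within)]:
    print(f"{name:>12}: {100 * ss / ss_total:.2f}% of variance")
```

With real evaluation scores in place of the simulated column, the "within-LLM" share corresponds to the sampling-noise component the abstract warns single-sample evaluations conflate with prompt or model effects.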
