This paper investigates a critical aspect of large language model (LLM) performance: the optimal formatting of classification task options in prompts. Through an extensive experimental study, we compared two selection formats -- bullet points and plain English -- to determine their impact on model performance. Our findings suggest that presenting options via bullet points generally yields better results, although there are some exceptions. Furthermore, our research highlights the need for continued exploration of option formatting to drive further improvements in model performance.
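To make the comparison concrete, here is a minimal sketch of the two option formats the abstract contrasts. The task text and labels are hypothetical illustrations, not taken from the paper's benchmarks:

```python
# Hypothetical sentiment-classification example illustrating the two
# selection formats compared in the paper.
labels = ["positive", "negative", "neutral"]
text = "The battery life exceeded my expectations."

# Bullet-point format: each option on its own line.
bullet_prompt = (
    f'Classify the sentiment of: "{text}"\n'
    "Choose one of the following options:\n"
    + "\n".join(f"- {label}" for label in labels)
)

# Plain-English format: options embedded in a single sentence.
plain_prompt = (
    f'Classify the sentiment of: "{text}" '
    f"as {', '.join(labels[:-1])}, or {labels[-1]}."
)

print(bullet_prompt)
print()
print(plain_prompt)
```

Both prompts carry identical information; only the presentation of the options differs, which is the variable the study manipulates.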
@article{han2025_2503.06926,
  title   = {Effect of Selection Format on LLM Performance},
  author  = {Yuchen Han and Yucheng Wu and Jeffrey Willard},
  journal = {arXiv preprint arXiv:2503.06926},
  year    = {2025}
}