Multiple-Choice Questions are Efficient and Robust LLM Evaluators

Abstract

We present GSM-MC and MATH-MC, two multiple-choice (MC) datasets constructed by collecting the answers and incorrect predictions of 60 open-source models on GSM8K and MATH. Through extensive experiments, we show that LLMs' performance on the MC versions of these two popular benchmarks is strongly correlated with their performance on the original versions and is robust to the choice of distractors and the order of options, while evaluation time is reduced by a factor of up to 30. Following a similar procedure, we introduce PythonIO, a new program-output-prediction MC dataset constructed from two other popular LLM evaluation benchmarks, HumanEval and MBPP. Our data and code are available at https://github.com/Geralt-Targaryen/MC-Evaluation.
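To make the construction concrete, the sketch below shows one way an MC item could be assembled from a gold answer and a pool of incorrect model predictions, with options shuffled. This is a minimal illustration, not the paper's actual pipeline: the function name, the deduplication step, and the sampling of exactly three distractors are assumptions for the example.

```python
import random

def build_mc_item(question, gold_answer, wrong_predictions, n_options=4, seed=0):
    """Hypothetical helper: turn a question, its gold answer, and a pool of
    incorrect model predictions into one multiple-choice item. The paper's
    real construction may differ in how distractors are filtered and sampled."""
    # Deduplicate the pool and drop any prediction that matches the gold answer.
    distractors = [p for p in dict.fromkeys(wrong_predictions) if p != gold_answer]
    rng = random.Random(seed)
    # Sample distractors and mix in the gold answer at a random position.
    options = rng.sample(distractors, n_options - 1) + [gold_answer]
    rng.shuffle(options)  # the paper reports robustness to option order
    return {
        "question": question,
        "options": options,
        "answer": "ABCD"[options.index(gold_answer)],
    }
```

Scoring such an item then reduces to comparing a model's single-letter choice against the stored answer key, which is what makes MC evaluation so much faster than free-form answer extraction and matching.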
