A Simple Ensemble Strategy for LLM Inference: Towards More Stable Text Classification

Abstract
With the advance of large language models (LLMs), LLMs have been applied to a wide variety of tasks. However, the variability and reproducibility of results across individual LLM trials have been largely overlooked in the existing literature, even though actual human annotation resolves disagreements among annotators by majority voting. This study therefore introduces a straightforward ensemble strategy for sentiment analysis with LLMs. The results demonstrate that ensembling multiple inferences from a medium-sized LLM yields more robust and accurate results than a single attempt with a large model, reducing RMSE by 18.6%.
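The paper's implementation is not reproduced here, but the core idea of aggregating repeated LLM inferences can be sketched as follows. This is a minimal illustration under stated assumptions: `classify_once` is a hypothetical single-inference wrapper (not from the paper) that returns one sentiment label per call, and majority voting is used to mirror how human annotation disagreements are typically resolved.

```python
from collections import Counter
from typing import Callable, List


def ensemble_classify(
    text: str,
    classify_once: Callable[[str], int],  # hypothetical single-inference LLM call
    n_trials: int = 5,
) -> int:
    """Run the same classification prompt n_trials times and majority-vote the results."""
    predictions: List[int] = [classify_once(text) for _ in range(n_trials)]
    # Majority vote over independent trials; ties resolve to the first-seen label.
    label, _count = Counter(predictions).most_common(1)[0]
    return label
```

For ordinal sentiment scores evaluated with RMSE, the vote could instead be a mean or median of the per-trial predictions; the abstract does not specify which aggregation the authors use.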
@article{niimi2025_2504.18884,
  title   = {A Simple Ensemble Strategy for LLM Inference: Towards More Stable Text Classification},
  author  = {Junichiro Niimi},
  journal = {arXiv preprint arXiv:2504.18884},
  year    = {2025}
}