NeurIPS 2023 LLM Efficiency Fine-tuning Competition

Our analysis of the NeurIPS 2023 large language model (LLM) fine-tuning competition revealed two trends: top-performing models exhibit significant overfitting on benchmark datasets, mirroring the broader problem of benchmark overfitting on popular leaderboards, and careful data curation is essential for producing a high-performing LLM. The competition, which consisted of two stages - an open evaluation stage with publicly available tasks and a closed evaluation stage with unseen tasks - allowed us to assess the generalizability of fine-tuned LLMs. Our results highlight the limitations of current benchmark-based evaluation schemes for generative models and demonstrate the need for more robust evaluation methods. Notably, the winning submissions utilized standard open-source libraries and focused primarily on data curation. To facilitate further research and promote reproducibility, we release all competition entries, Docker files, and evaluation infrastructure, providing a valuable resource for the community to explore fine-tuning, overfitting, and reproducibility in LLMs.
@article{saroufim2025_2503.13507,
  title={NeurIPS 2023 LLM Efficiency Fine-tuning Competition},
  author={Mark Saroufim and Yotam Perlitz and Leshem Choshen and Luca Antiga and Greg Bowyer and Christian Puhrsch and Driss Guessous and Supriya Rao and Geeta Chauhan and Ashvini Kumar and Jindal Pawan Kumar and Rajpoot Ankur Parikh and Joe Isaacson and Weiwei Yang},
  journal={arXiv preprint arXiv:2503.13507},
  year={2025}
}