Lessons from the Trenches on Reproducible Evaluation of Language Models

23 May 2024
Stella Biderman
Hailey Schoelkopf
Lintang Sutawika
Leo Gao
J. Tow
Baber Abbasi
Alham Fikri Aji
Pawan Sasanka Ammanamanchi
Sid Black
Jordan Clive
Anthony DiPofi
Julen Etxaniz
Benjamin Fattori
Jessica Zosa Forde
Charles Foster
Jeffrey Hsu
Mimansa Jaiswal
Wilson Y. Lee
Haonan Li
Charles Lovering
Niklas Muennighoff
Ellie Pavlick
Jason Phang
Aviya Skowron
Samson Tan
Xiangru Tang
Kevin A. Wang
Genta Indra Winata
François Yvon
Andy Zou
Abstract

Effective evaluation of language models remains an open challenge in NLP. Researchers and engineers face methodological issues such as the sensitivity of models to the evaluation setup, the difficulty of making proper comparisons across methods, and the lack of reproducibility and transparency. In this paper, we draw on three years of experience in evaluating large language models to provide guidance and lessons for researchers. First, we provide an overview of common challenges faced in language model evaluation. Second, we delineate best practices for addressing or lessening the impact of these challenges on research. Third, we present the Language Model Evaluation Harness (lm-eval): an open-source library for the independent, reproducible, and extensible evaluation of language models that seeks to address these issues. We describe the features of the library as well as case studies in which it has been used to alleviate these methodological concerns.
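
To make the abstract's description of lm-eval concrete, the sketch below shows one way the library can be driven from Python. It is a minimal sketch, not the paper's canonical usage: it assumes the `lm-eval` package is installed (`pip install lm-eval`) and uses the high-level `simple_evaluate` entry point with a Hugging Face model backend; the specific model checkpoint, task names, and argument defaults shown here are illustrative choices, and exact argument names can differ across lm-eval versions.

```python
# Illustrative sketch: evaluating a Hugging Face causal LM with lm-eval.
# Assumes `pip install lm-eval`; argument names may vary between versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face model backend
    model_args="pretrained=EleutherAI/pythia-160m",  # example checkpoint (any HF causal LM)
    tasks=["lambada_openai", "hellaswag"],           # example benchmark tasks
    num_fewshot=0,                                   # zero-shot evaluation
    batch_size=8,
)

# Per-task metrics (e.g. accuracy) are keyed by task name in the results dict.
for task, metrics in results["results"].items():
    print(task, metrics)
```

The same kind of run can also be launched from the command line via the library's `lm_eval` console script with equivalent `--model`, `--model_args`, and `--tasks` flags, which is convenient for reproducible, scriptable evaluation pipelines.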

View on arXiv: https://arxiv.org/abs/2405.14782