NorEval: A Norwegian Language Understanding and Generation Evaluation Benchmark

Abstract

This paper introduces NorEval, a new and comprehensive evaluation suite for large-scale standardized benchmarking of Norwegian generative language models (LMs). NorEval consists of 24 high-quality human-created datasets, five of which are created from scratch. In contrast to existing benchmarks for Norwegian, NorEval covers a broad spectrum of task categories targeting Norwegian language understanding and generation, establishes human baselines, and covers both official written standards of the Norwegian language: Bokmål and Nynorsk. All our datasets and a collection of over 100 human-written prompts are integrated into LM Evaluation Harness, ensuring flexible and reproducible evaluation. We describe the NorEval design and present the results of benchmarking 19 open-source pre-trained and instruction-tuned LMs for Norwegian across various scenarios. Our benchmark, evaluation framework, and annotation materials are publicly available.

@article{mikhailov2025_2504.07749,
  title={NorEval: A Norwegian Language Understanding and Generation Evaluation Benchmark},
  author={Vladislav Mikhailov and Tita Enstad and David Samuel and Hans Christian Farsethås and Andrey Kutuzov and Erik Velldal and Lilja Øvrelid},
  journal={arXiv preprint arXiv:2504.07749},
  year={2025}
}