Climate-Eval: A Comprehensive Benchmark for NLP Tasks Related to Climate Change

Climate-Eval is a comprehensive benchmark designed to evaluate natural language processing models across a broad range of tasks related to climate change. Climate-Eval aggregates existing datasets together with a news classification dataset newly developed for this release, yielding a benchmark of 25 tasks based on 13 datasets that spans text classification, question answering, and information extraction across key aspects of climate discourse. Our benchmark provides a standardized evaluation suite for systematically assessing the performance of large language models (LLMs) on these tasks. Additionally, we conduct an extensive evaluation of open-source LLMs (ranging from 2B to 70B parameters) in both zero-shot and few-shot settings, analyzing their strengths and limitations in the domain of climate change.
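To make the zero-shot versus few-shot distinction concrete, below is a minimal, library-agnostic sketch of what such a classification evaluation loop might look like. The `generate` callable, the label set, and the prompt format are hypothetical stand-ins for illustration; they are not part of the Climate-Eval release.

```python
from typing import Callable, Sequence

# Hypothetical label set for a binary climate news classification task.
LABELS = ["climate-related", "not climate-related"]

def build_prompt(text: str, demos: Sequence[tuple[str, str]] = ()) -> str:
    """Build a classification prompt; an empty `demos` sequence is the zero-shot case."""
    lines = [f"Classify the text as one of: {', '.join(LABELS)}."]
    for demo_text, demo_label in demos:  # few-shot: prepend labeled in-context examples
        lines.append(f"Text: {demo_text}\nLabel: {demo_label}")
    lines.append(f"Text: {text}\nLabel:")
    return "\n\n".join(lines)

def evaluate(generate: Callable[[str], str],
             examples: Sequence[tuple[str, str]],
             demos: Sequence[tuple[str, str]] = ()) -> float:
    """Accuracy of a text-generation model on a labeled classification set."""
    correct = 0
    for text, gold in examples:
        prediction = generate(build_prompt(text, demos)).strip().lower()
        correct += prediction.startswith(gold.lower())
    return correct / len(examples)
```

Here `generate` would wrap whichever open-source LLM is under test; passing an empty `demos` sequence yields the zero-shot setting, while supplying a handful of labeled examples yields the few-shot one, so the same loop covers both evaluation regimes.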
@article{kurfalı2025_2505.18653,
  title   = {Climate-Eval: A Comprehensive Benchmark for NLP Tasks Related to Climate Change},
  author  = {Murathan Kurfalı and Shorouq Zahra and Joakim Nivre and Gabriele Messori},
  journal = {arXiv preprint arXiv:2505.18653},
  year    = {2025}
}