
AtmosSci-Bench: Evaluating the Recent Advance of Large Language Model for Atmospheric Science

Abstract

The rapid advancement of large language models (LLMs), particularly in their reasoning capabilities, holds transformative potential for addressing complex challenges in atmospheric science. However, leveraging LLMs effectively in this domain requires a robust and comprehensive evaluation benchmark. To address this need, we present AtmosSci-Bench, a novel benchmark designed to systematically assess LLM performance across five core categories of atmospheric science problems: hydrology, atmospheric dynamics, atmospheric physics, geophysics, and physical oceanography. We employ a template-based question generation framework that enables scalable generation of diverse multiple-choice questions curated from graduate-level atmospheric science problems. We conduct a comprehensive evaluation of representative LLMs, categorized into four groups: instruction-tuned models, advanced reasoning models, math-augmented models, and domain-specific climate models. Our analysis offers insights into the reasoning and problem-solving capabilities of LLMs in atmospheric science. We believe AtmosSci-Bench can serve as a critical step toward advancing LLM applications in climate services by offering a standard and rigorous evaluation framework. Our source code is available at this https URL.
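To make the template-based question generation framework concrete, the sketch below illustrates one plausible reading of the idea: a graduate-level problem becomes a template with symbolic placeholders, numeric values are sampled per instance, the correct answer is recomputed from the governing formula, and perturbed answers serve as distractors. This is a minimal illustration, not the authors' released code; all names (`QuestionTemplate`, `make_mcq`) and the distractor scheme are assumptions.

```python
import random
from dataclasses import dataclass

@dataclass
class QuestionTemplate:
    text: str         # question body with {placeholders}
    variables: dict   # placeholder -> (low, high) sampling range
    solve: callable   # maps sampled values to the correct answer

def make_mcq(template: QuestionTemplate, n_options: int = 4, seed: int = 0):
    """Instantiate one multiple-choice question from a template.

    Hypothetical helper: the actual benchmark's perturbation and
    formatting rules may differ.
    """
    rng = random.Random(seed)
    values = {k: round(rng.uniform(lo, hi), 1)
              for k, (lo, hi) in template.variables.items()}
    answer = template.solve(**values)
    # Distractors as multiplicative perturbations of the true answer --
    # a simple stand-in for whatever scheme the benchmark actually uses.
    options = [answer] + [answer * f
                          for f in rng.sample([0.5, 0.8, 1.2, 2.0],
                                              n_options - 1)]
    rng.shuffle(options)
    return template.text.format(**values), options, options.index(answer)

# Example from atmospheric dynamics: geostrophic wind speed
# u = (1 / (rho * f)) * |dp/dn|
template = QuestionTemplate(
    text=("A horizontal pressure gradient of {dpdn} Pa/km acts at a "
          "latitude where f = {f} x 1e-4 s^-1 and the air density is "
          "{rho} kg/m^3. What is the geostrophic wind speed in m/s?"),
    variables={"dpdn": (1.0, 3.0), "f": (0.8, 1.4), "rho": (1.0, 1.3)},
    solve=lambda dpdn, f, rho: (dpdn * 1e-3) / (rho * f * 1e-4),
)

question, options, correct_idx = make_mcq(template, seed=42)
print(question)
for i, opt in enumerate(options):
    print(f"  ({chr(65 + i)}) {opt:.2f}")
print("Answer:", chr(65 + correct_idx))
```

Because each template can be re-instantiated with fresh sampled values, this style of generation scales the question pool while keeping every instance verifiable against its closed-form solution.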

@article{li2025_2502.01159,
  title={AtmosSci-Bench: Evaluating the Recent Advance of Large Language Model for Atmospheric Science},
  author={Chenyue Li and Wen Deng and Mengqian Lu and Binhang Yuan},
  journal={arXiv preprint arXiv:2502.01159},
  year={2025}
}