
SciHorizon: Benchmarking AI-for-Science Readiness from Scientific Data to Large Language Models

Abstract

In recent years, the rapid advancement of Artificial Intelligence (AI) technologies, particularly Large Language Models (LLMs), has revolutionized the paradigm of scientific discovery, establishing AI-for-Science (AI4Science) as a dynamic and evolving field. However, an effective framework for the overall assessment of AI4Science is still lacking, particularly from a holistic perspective that unifies data quality and model capability. Therefore, in this study, we propose SciHorizon, a comprehensive assessment framework designed to benchmark the readiness of AI4Science from both scientific data and LLM perspectives. First, we introduce a generalizable framework for assessing AI-ready scientific data, encompassing four key dimensions: Quality, FAIRness, Explainability, and Compliance, which are subdivided into 15 sub-dimensions. Drawing on data resource papers published in peer-reviewed journals between 2018 and 2023, we present recommendation lists of AI-ready datasets for both Earth and Life Sciences, making a novel and original contribution to the field. Concurrently, to assess the capabilities of LLMs across multiple scientific disciplines, we establish 16 assessment dimensions based on five core indicators (Knowledge, Understanding, Reasoning, Multimodality, and Values) spanning Mathematics, Physics, Chemistry, Life Sciences, and Earth and Space Sciences. Using the developed benchmark datasets, we conduct a comprehensive evaluation of more than 20 representative open-source and closed-source LLMs. All results are publicly available and can be accessed online at this http URL.

@article{qin2025_2503.13503,
  title={SciHorizon: Benchmarking AI-for-Science Readiness from Scientific Data to Large Language Models},
  author={Chuan Qin and Xin Chen and Chengrui Wang and Pengmin Wu and Xi Chen and Yihang Cheng and Jingyi Zhao and Meng Xiao and Xiangchao Dong and Qingqing Long and Boya Pan and Han Wu and Chengzan Li and Yuanchun Zhou and Hui Xiong and Hengshu Zhu},
  journal={arXiv preprint arXiv:2503.13503},
  year={2025}
}