CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning

14 March 2025
Hao Cui
Zahra Shamsi
Gowoon Cheon
Xuejian Ma
Shutong Li
Maria Tikhanovskaya
Peter C. Norgaard
Nayantara Mudur
Martyna Plomecka
Paul Raccuglia
Yasaman Bahri
Victor V. Albert
Pranesh Srinivasan
Haining Pan
Philippe Faist
Brian Rohr
Ekin Dogus Cubuk
Muratahan Aykol
Amil Merchant
Michael J. Statt
Dan Morris
Drew Purves
Elise Kleeman
Ruth Alcantara
Matthew Abraham
Muqthar Mohammad
Ean Phing VanLee
Chenfei Jiang
Elizabeth Dorfman
Eun-Ah Kim
Michael P. Brenner
Viren Jain
Sameera Ponda
Subhashini Venugopalan
Abstract

Scientific problem-solving involves synthesizing information while applying expert knowledge. We introduce CURIE, a scientific long-Context Understanding, Reasoning, and Information Extraction benchmark to measure the potential of Large Language Models (LLMs) in scientific problem-solving and in assisting scientists in realistic workflows. The benchmark introduces ten challenging tasks with a total of 580 problem-and-solution pairs curated by experts in six disciplines - materials science, condensed matter physics, quantum computing, geospatial analysis, biodiversity, and proteins - covering both experimental and theoretical workflows in science. We evaluate a range of closed and open LLMs on the CURIE tasks, which require domain expertise, comprehension of long in-context information, and multi-step reasoning. While Gemini Flash 2.0 and Claude-3 show consistently high comprehension across domains, the popular GPT-4o and Command R+ fail dramatically on protein sequencing tasks. With the best performance at 32%, there is much room for improvement for all models. We hope that insights gained from CURIE can guide the future development of LLMs in the sciences. Evaluation code and data are in this https URL.
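To make the evaluation setup concrete, below is a minimal sketch of a long-context evaluation loop in the spirit of CURIE. The JSONL layout, the field names ("context", "question", "answer"), the curie_tasks directory, the substring-match scoring, and the query_model stub are all illustrative assumptions, not the benchmark's actual format or metrics; those are defined by the released evaluation code and data.

import json
from pathlib import Path

def query_model(prompt: str) -> str:
    # Placeholder: replace with a call to a real LLM client.
    # Returns an empty string so the sketch runs end-to-end.
    return ""

def evaluate_task(task_file: Path) -> float:
    # Score one task file: the fraction of examples whose reference
    # answer appears verbatim in the model output. This is a crude
    # proxy metric, not CURIE's actual scoring.
    records = [json.loads(line)
               for line in task_file.read_text().splitlines() if line.strip()]
    hits = 0
    for rec in records:
        # Long-context setting: the full source document is placed
        # in the prompt alongside the question.
        prompt = f"{rec['context']}\n\nQuestion: {rec['question']}\nAnswer:"
        output = query_model(prompt)
        hits += rec["answer"].lower() in output.lower()
    return hits / len(records) if records else 0.0

if __name__ == "__main__":
    # Hypothetical layout: one JSONL file per task in ./curie_tasks.
    for path in sorted(Path("curie_tasks").glob("*.jsonl")):
        print(f"{path.stem}: {evaluate_task(path):.1%}")

Swapping query_model for a real client and the substring check for the benchmark's own metrics would turn this scaffold into a usable harness; the per-task loop structure is the part that carries over.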

@article{cui2025_2503.13517,
  title={CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning},
  author={Hao Cui and Zahra Shamsi and Gowoon Cheon and Xuejian Ma and Shutong Li and Maria Tikhanovskaya and Peter Norgaard and Nayantara Mudur and Martyna Plomecka and Paul Raccuglia and Yasaman Bahri and Victor V. Albert and Pranesh Srinivasan and Haining Pan and Philippe Faist and Brian Rohr and Ekin Dogus Cubuk and Muratahan Aykol and Amil Merchant and Michael J. Statt and Dan Morris and Drew Purves and Elise Kleeman and Ruth Alcantara and Matthew Abraham and Muqthar Mohammad and Ean Phing VanLee and Chenfei Jiang and Elizabeth Dorfman and Eun-Ah Kim and Michael P. Brenner and Viren Jain and Sameera Ponda and Subhashini Venugopalan},
  journal={arXiv preprint arXiv:2503.13517},
  year={2025}
}