
Probing the Critical Point (CritPt) of AI Reasoning: a Frontier Physics Research Benchmark

30 September 2025
Minhui Zhu
Minyang Tian
Xiaocheng Yang
Tianci Zhou
Lifan Yuan
Penghao Zhu
Eli Chertkov
Shengyan Liu
L. Yuan
Ziming Ji
Indranil Das
Junyi Cao
Yufeng Du
Jinchen He
Yifan Su
Jiabin Yu
Yikun Jiang
Y. Zhang
Chang Liu
Ze-Min Huang
Weizhen Jia
Xinan Chen
Peixue Wu
Y. Wang
Juntai Zhou
Yong Zhao
Farshid Jafarpour
Jessie Shelton
Aaron Young
John Bartolotta
Wenchao Xu
Yue Sun
Anjun Chu
Victor Colussi
Chris Akers
Nathan Brooks
Wenbo Fu
Christopher Wilson
Jinchao Zhao
Marvin Qi
Anqi Mu
Y. Yang
Allen Zang
Yang Lyu
Peizhi Mai
Xuefei Guo
Luyu Gao
Z. Yang
Dmytro Bandak
Yaïr Hein
Yonatan Kahn
Kevin Zhou
John Drew Wilson
Jarrod T. Reilly
Di Luo
Daniel Inafuku
Hao Tong
L. Yang
Ruixing Zhang
X. Wang
Ofir Press
Nicolas Chia
Eliu A. Huerta
Links: arXiv (abs) · PDF · HTML · HuggingFace · GitHub
Main text: 23 pages, 6 figures; bibliography: 16 pages; 8 tables.
Abstract

While large language models (LLMs) with reasoning capabilities are progressing rapidly on high-school math competitions and coding, can they reason effectively through the complex, open-ended challenges found in frontier physics research? And crucially, what kinds of reasoning tasks do physicists want LLMs to assist with? To address these questions, we present CritPt (Complex Research using Integrated Thinking - Physics Test, pronounced "critical point"), the first benchmark designed to test LLMs on unpublished, research-level reasoning tasks that broadly cover modern physics research areas, including condensed matter, quantum physics, atomic, molecular & optical physics, astrophysics, high energy physics, mathematical physics, statistical physics, nuclear physics, nonlinear dynamics, fluid dynamics, and biophysics. CritPt consists of 71 composite research challenges designed to simulate entry-level, full-scale research projects, which are further decomposed into 190 simpler checkpoint tasks for finer-grained insight. All problems are newly created by more than 50 active physics researchers based on their own research. Every problem is hand-curated to admit a guess-resistant, machine-verifiable answer and is evaluated by an automated grading pipeline heavily customized for advanced physics-specific output formats. We find that while current state-of-the-art LLMs show early promise on isolated checkpoints, they remain far from able to reliably solve full research-scale challenges: the best average accuracy among base models is only 5.7%, achieved by GPT-5 (high), rising moderately to around 10% when models are equipped with coding tools. Through the realistic yet standardized evaluation offered by CritPt, we highlight a large disconnect between current model capabilities and realistic physics research demands, offering a foundation to guide the development of scientifically grounded AI tools.
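The abstract's notion of a "guess-resistant, machine-verifiable answer" and an average-accuracy score is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of how a single numeric physics answer might be auto-graded against a reference value with relative/absolute tolerances, and how an average accuracy like the reported 5.7% arises from per-problem pass/fail results. The function names, tolerance values, and grading criterion are assumptions for illustration only, not CritPt's actual pipeline, which per the abstract also handles advanced physics-specific output formats beyond scalars.

```python
import math

def grade_numeric(predicted: float, reference: float,
                  rel_tol: float = 1e-3, abs_tol: float = 1e-9) -> bool:
    """Hypothetical pass/fail check for a numeric answer: True if the
    prediction matches the reference within the given tolerances."""
    return math.isclose(predicted, reference, rel_tol=rel_tol, abs_tol=abs_tol)

def average_accuracy(results: list[bool]) -> float:
    """Fraction of problems graded as correct (the abstract's headline metric)."""
    return sum(results) / len(results) if results else 0.0

# Example: solving 4 of the 71 full research challenges gives ~5.6% accuracy,
# close to the 5.7% best base-model score the abstract reports for GPT-5 (high).
print(f"{average_accuracy([True] * 4 + [False] * 67):.1%}")
```

A relative tolerance keeps such a check scale-invariant across quantities spanning many orders of magnitude, which physics answers typically do; an absolute floor handles references near zero, where relative comparison alone breaks down.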
