PhD Knowledge Not Required: A Reasoning Challenge for Large Language Models
Main: 9 pages, 12 figures, 1 table; bibliography: 2 pages; appendix: 6 pages
Abstract
Existing benchmarks for frontier models often test specialized, "PhD-level" knowledge that is difficult for non-experts to grasp. In contrast, we present a benchmark of 594 problems based on the NPR Sunday Puzzle Challenge that requires only general knowledge. Our benchmark is challenging for both humans and models; however, correct solutions are easy to verify, and models' mistakes are easy to spot. As LLMs are more widely deployed in society, we believe it is useful to develop benchmarks for frontier models that humans can understand without the need for deep domain expertise.