Recent advances in large language models (LLMs) have shown impressive progress on mathematical reasoning tasks. However, current evaluation benchmarks predominantly focus on the accuracy of final answers, often overlooking the logical rigor crucial to mathematical problem solving. The claim that state-of-the-art LLMs can solve Math Olympiad-level problems requires closer examination. To explore this, we conducted both qualitative and quantitative human evaluations of proofs generated by LLMs and developed a schema for automatically assessing their reasoning capabilities. Our study reveals that current LLMs fall significantly short of solving challenging Olympiad-level problems and frequently fail to distinguish correct mathematical reasoning from clearly flawed solutions. Our analyses demonstrate that the occasional correct final answers provided by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning. These findings underscore the substantial gap between LLM performance and human expertise in advanced mathematical reasoning, and they highlight the importance of developing benchmarks that prioritize the soundness of the reasoning leading to an answer rather than merely the correctness of the final answer.
@article{mahdavi2025_2504.01995,
  title   = {Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics},
  author  = {Hamed Mahdavi and Alireza Hashemi and Majid Daliri and Pegah Mohammadipour and Alireza Farhadi and Samira Malek and Yekta Yazdanifard and Amir Khasahmadi and Vasant Honavar},
  journal = {arXiv preprint arXiv:2504.01995},
  year    = {2025}
}