Investigating Large Language Models in Diagnosing Students' Cognitive Skills in Math Problem-solving

Mathematics learning entails mastery of content knowledge as well as the cognitive skills of knowing, applying, and reasoning with it. Automated math assessment has primarily focused on grading students' demonstration of content knowledge by finding textual evidence, such as specific numbers, formulas, and statements. Recent advancements in the problem-solving, image recognition, and reasoning capabilities of large language models (LLMs) show promise for nuanced evaluation of students' cognitive skills. Diagnosing cognitive skills requires inferring students' thinking processes beyond textual evidence, which remains an underexplored task in LLM-based automated assessment. In this work, we investigate how state-of-the-art LLMs diagnose students' cognitive skills in mathematics. We constructed MathCog, a novel benchmark dataset comprising 639 student responses to 110 expert-curated middle school math problems, each annotated with detailed teacher diagnoses based on cognitive skill checklists. Using MathCog, we evaluated 16 closed and open LLMs of varying model sizes and vendors. Our evaluation reveals that even state-of-the-art LLMs struggle with the task, with all F1 scores below 0.5, and tend to exhibit strong false confidence on incorrect cases. We also found that model size positively correlates with diagnosis performance. Finally, we discuss the implications of these findings, the overconfidence issue, and directions for improving automated cognitive skill diagnosis.
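The abstract reports F1 scores for checklist-based diagnosis. Below is a minimal, hypothetical sketch of how a per-response F1 might be computed, assuming each diagnosis is represented as the set of checklist items flagged by the model or by the teacher; the item names and the scoring function are illustrative and not drawn from MathCog or the paper's evaluation code.

```python
# Hypothetical sketch: score an LLM's checklist diagnosis against a teacher's
# annotation with binary F1, the metric family the abstract reports.
# Checklist item names are illustrative placeholders.

def f1_score(predicted: set[str], gold: set[str]) -> float:
    """Binary F1 over checklist items flagged by the model vs. the teacher."""
    if not predicted and not gold:
        return 1.0  # both agree there is nothing to flag
    true_positives = len(predicted & gold)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: the model flags two skills, the teacher flagged two, one overlaps.
model_diagnosis = {"sets up the equation", "applies the distributive property"}
teacher_diagnosis = {"sets up the equation", "interprets the solution"}
print(f1_score(model_diagnosis, teacher_diagnosis))  # 0.5
```

Averaging such per-response scores across a benchmark like MathCog would yield an aggregate figure comparable in spirit to the sub-0.5 F1 scores reported in the abstract.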
@article{jin2025_2504.00843,
  title={Investigating Large Language Models in Diagnosing Students' Cognitive Skills in Math Problem-solving},
  author={Hyoungwook Jin and Yoonsu Kim and Dongyun Jung and Seungju Kim and Kiyoon Choi and Jinho Son and Juho Kim},
  journal={arXiv preprint arXiv:2504.00843},
  year={2025}
}