
Evaluating Language Model Math Reasoning via Grounding in Educational Curricula

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Main: 9 pages · Bibliography: 4 pages · Appendix: 17 pages · 23 figures · 8 tables
Abstract

Our work presents a novel angle for evaluating language models' (LMs) mathematical abilities by investigating whether they can discern the skills and concepts enabled by math content. We contribute two datasets: one consisting of 385 fine-grained descriptions of K-12 math skills and concepts, or standards, from Achieve the Core (ATC), and another of 9.9K problems labeled with these standards (MathFish). Working with experienced teachers, we find that LMs struggle to tag and verify standards linked to problems, and instead predict labels that are close to the ground truth but differ in subtle ways. We also show that LMs often generate problems that do not fully align with the standards described in prompts. Finally, we categorize problems in GSM8k using math standards, allowing us to better understand why some problems are more difficult for models to solve than others.
