
Evaluating the Meta- and Object-Level Reasoning of Large Language Models for Question Answering

Abstract

Large Language Models (LLMs) excel at natural language tasks but still face challenges in Question Answering (QA) tasks that require complex, multi-step reasoning. We outline the types of reasoning required in some of these tasks and reframe them in terms of meta-level reasoning (akin to high-level strategic reasoning or planning) and object-level reasoning (embodied in lower-level tasks such as mathematical reasoning). We introduce Franklin, a novel dataset requiring both meta- and object-level reasoning, and use it along with three other datasets to evaluate four LLMs on question answering tasks requiring multiple steps of reasoning. Results from human annotation studies suggest that LLMs demonstrate meta-level reasoning with high frequency, but struggle with the object-level reasoning tasks in some of the datasets used. Additionally, evidence suggests that LLMs find the object-level reasoning required for the questions in the Franklin dataset challenging, yet they exhibit strong performance on its meta-level reasoning requirements.

@article{ferguson2025_2502.10338,
  title={Evaluating the Meta- and Object-Level Reasoning of Large Language Models for Question Answering},
  author={Nick Ferguson and Liane Guillou and Alan Bundy and Kwabena Nuamah},
  journal={arXiv preprint arXiv:2502.10338},
  year={2025}
}