What is an "Abstract Reasoner"? Revisiting Experiments and Arguments about Large Language Models

Main: 8 pages · Bibliography: 2 pages · Appendix: 3 pages · 11 figures · 3 tables
Abstract
Recent work has argued that large language models (LLMs) are not "abstract reasoners", citing their poor zero-shot performance on a variety of challenging tasks as evidence. We revisit these experiments in order to add nuance to the claim. First, we show that while LLMs indeed perform poorly in a zero-shot setting, even tuning a small subset of parameters for input encoding can enable near-perfect performance. However, we also show that this finetuning does not necessarily transfer across datasets. We take this collection of empirical results as an invitation to (re-)open the discussion of what it means to be an "abstract reasoner", and why it matters whether LLMs fit the bill.
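The abstract's central empirical point is that finetuning only the parameters responsible for input encoding (e.g., the token-embedding matrix) while freezing the rest of the model can recover near-perfect performance. Below is a minimal sketch of that kind of embedding-only finetuning, assuming a Hugging Face causal LM with "gpt2" as a placeholder model; the specific models, tasks, and training details in the paper may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; not necessarily one of the models studied in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Freeze every parameter of the pretrained model...
for param in model.parameters():
    param.requires_grad = False

# ...then unfreeze only the input embeddings. (In weight-tied models like GPT-2,
# the output head shares this matrix, so it is updated as well.)
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable:,} / {total:,}")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

# One illustrative training step on a toy abstract-reasoning-style example.
batch = tokenizer(["a b c -> c b a"], return_tensors="pt")
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

This keeps the vast majority of the model fixed, which is what makes the result notable: the improvement comes from adapting the input encoding alone rather than the model's internal computation.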