Pitfalls of Scale: Investigating the Inverse Task of Redefinition in Large Language Models

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Main: 10 pages · Bibliography: 2 pages · Appendix: 13 pages
15 figures · 23 tables
Abstract

Inverse tasks can uncover potential reasoning gaps as Large Language Models (LLMs) scale up. In this work, we explore the redefinition task, in which we assign alternative values to well-known physical constants and units of measure, prompting LLMs to respond accordingly. Our findings show that not only does model performance degrade with scale, but its false confidence also rises. Moreover, while factors such as prompting strategies or response formatting are influential, they do not prevent LLMs from anchoring to memorized values.
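The redefinition setup the abstract describes can be sketched as a simple prompt construction: pair a well-known constant with a counterfactual replacement value, instruct the model to adopt the new value, then ask a question whose answer depends on it. Everything below is illustrative, assuming a hypothetical prompt template and example constants; the paper's actual materials, wording, and evaluation protocol are not reproduced here.

```python
# Minimal sketch of a redefinition prompt, assuming a hypothetical template.
# The constant names, canonical values, and redefined values are
# illustrative examples, not the paper's actual test items.

# Well-known quantities mapped to (canonical value, counterfactual value).
REDEFINITIONS = {
    "the speed of light in vacuum (m/s)": ("299792458", "150000000"),
    "the number of seconds in one minute": ("60", "100"),
}

def build_redefinition_prompt(name: str, new_value: str, question: str) -> str:
    """Instruct the model to adopt the alternative value, then pose a
    question whose correct answer requires using that value (not the
    memorized one)."""
    return (
        f"Assume that from now on {name} is {new_value}. "
        f"Answer using this redefinition only.\n{question}"
    )

prompt = build_redefinition_prompt(
    "the number of seconds in one minute", "100",
    "How many seconds are there in two minutes?",
)
print(prompt)
```

Under this redefinition the expected answer is 200, so a model that replies 120 has anchored to the memorized value of 60, which is the failure mode the abstract reports worsening with scale.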
