Reasoning with Latent Thoughts: On the Power of Looped Transformers

Abstract

Large language models have shown remarkable reasoning abilities, and scaling laws suggest that a large parameter count, especially along the depth axis, is the primary driver of these abilities. In this work, we make a stronger claim -- many reasoning problems require large depth but not necessarily many parameters. This unlocks a novel application of looped models for reasoning. First, we show that for many synthetic reasoning problems like addition, p-hop induction, and math problems, a k-layer transformer looped L times nearly matches the performance of a kL-layer non-looped model, and is significantly better than a k-layer model. This is further corroborated by theoretical results showing that many such reasoning problems can be solved via iterative algorithms, and thus can be solved effectively using looped models with nearly optimal depth. Perhaps surprisingly, these benefits also translate to practical settings of language modeling -- on many downstream reasoning tasks, a language model with k layers looped L times can be competitive with, if not better than, a kL-layer language model. In fact, our empirical analysis reveals an intriguing phenomenon: looped and non-looped models exhibit scaling behavior that depends on their effective depth, akin to the inference-time scaling of chain-of-thought (CoT) reasoning. We further elucidate the connection to CoT reasoning by proving that looped models implicitly generate latent thoughts and can simulate T steps of CoT with T loops. Inspired by these findings, we also present an interesting dichotomy between reasoning and memorization, and design a looping-based regularization that is effective on both fronts.
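
The core architectural idea, reusing a small stack of transformer layers multiple times to obtain depth without adding parameters, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation; the layer type, model dimensions, and loop count are assumptions chosen for the example.

import torch
import torch.nn as nn

class LoopedTransformer(nn.Module):
    # A k-layer block of transformer layers whose weights are reused L times,
    # giving an effective depth of k*L with only k layers' worth of parameters.
    def __init__(self, d_model=256, n_heads=4, k_layers=2, n_loops=6):
        super().__init__()
        self.n_loops = n_loops
        self.block = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(k_layers)
        )

    def forward(self, x):
        # Apply the same k layers n_loops times (weights are shared across loops).
        for _ in range(self.n_loops):
            for layer in self.block:
                x = layer(x)
        return x

model = LoopedTransformer()       # 2 layers of parameters, effective depth 2*6 = 12
x = torch.randn(1, 16, 256)       # (batch, sequence length, d_model)
print(model(x).shape)             # torch.Size([1, 16, 256])

Under this reading, a k-layer model looped L times is compared against a non-looped model with kL distinct layers, which has roughly L times as many parameters but the same effective depth.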

@article{saunshi2025_2502.17416,
  title={Reasoning with Latent Thoughts: On the Power of Looped Transformers},
  author={Nikunj Saunshi and Nishanth Dikkala and Zhiyuan Li and Sanjiv Kumar and Sashank J. Reddi},
  journal={arXiv preprint arXiv:2502.17416},
  year={2025}
}