
Learning to Reason in 13 Parameters

John X. Morris
Niloofar Mireshghallah
Mark Ibrahim
Saeed Mahloujifar
Main: 10 pages · 9 figures · 1 table · Bibliography: 3 pages · Appendix: 1 page
Abstract

Recent research has shown that language models can learn to reason, often via reinforcement learning. Some work even trains low-rank parameterizations for reasoning, but conventional LoRA cannot scale below the model dimension. We question whether even rank-1 LoRA is necessary for learning to reason and propose TinyLoRA, a method for scaling low-rank adapters to sizes as small as one parameter. With our new parameterization, we train the 8B-parameter Qwen2.5 model to 91% accuracy on GSM8K with only 13 trained parameters in bf16 (26 total bytes). We find this trend holds in general: we recover 90% of performance improvements while training 1000x fewer parameters across a suite of more difficult learning-to-reason benchmarks such as AIME, AMC, and MATH500. Notably, we achieve such strong performance only with RL: models trained using SFT require 100-1000x larger updates to reach the same performance.
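To make the sub-rank-1 idea concrete, here is a minimal PyTorch sketch of one way an adapter can be shrunk below the model dimension: freeze random rank-1 directions and train only a handful of scalar coefficients that mix them. This is a hypothetical illustration, not the paper's exact TinyLoRA parameterization; the class name, scaling, and number of frozen directions are assumptions.

```python
import torch
import torch.nn as nn

class TinyLoRALinear(nn.Module):
    """Hypothetical sub-rank-1 adapter sketch: the weight update is a sum of
    FROZEN random rank-1 outer products u_i v_i^T, each scaled by a single
    trainable scalar s_i. Only n_params scalars are trained."""

    def __init__(self, base: nn.Linear, n_params: int = 13, seed: int = 0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base model stays frozen

        g = torch.Generator().manual_seed(seed)
        d_out, d_in = base.weight.shape
        # Frozen random factors (buffers, not parameters): one (u_i, v_i)
        # pair per trainable scalar, normalized to keep activations stable.
        self.register_buffer("u", torch.randn(n_params, d_out, generator=g) / d_out**0.5)
        self.register_buffer("v", torch.randn(n_params, d_in, generator=g) / d_in**0.5)
        # The ONLY trainable parameters: n_params scalars, zero-initialized
        # so the adapter starts as an identity update (like LoRA's B=0 init).
        self.s = nn.Parameter(torch.zeros(n_params))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # delta_W = sum_i s_i * u_i v_i^T, applied without materializing it:
        proj = x @ self.v.T               # (..., n_params)
        delta = (proj * self.s) @ self.u  # (..., d_out)
        return self.base(x) + delta

# A 64x64 layer adapted with 13 trainable parameters total.
layer = TinyLoRALinear(nn.Linear(64, 64), n_params=13)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
```

Under this sketch, `trainable` is exactly `n_params` regardless of the layer's width, which is what lets the update shrink far below the `d + k` floor of a rank-1 LoRA.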
