ResearchTrend.AI
Tina: Tiny Reasoning Models via LoRA

22 April 2025
Shangshang Wang
Julian Asilis
Ömer Faruk Akgül
Enes Burak Bilgin
Ollie Liu
Willie Neiswanger
Abstract

How cost-effectively can strong reasoning abilities be achieved in language models? Driven by this fundamental question, we present Tina, a family of tiny reasoning models achieved with high cost-efficiency. Notably, Tina demonstrates that substantial reasoning performance can be developed using only minimal resources, by applying parameter-efficient updates during reinforcement learning (RL), using low-rank adaptation (LoRA), to an already tiny 1.5B parameter base model. This minimalist approach produces models that achieve reasoning performance which is competitive with, and sometimes surpasses, SOTA RL reasoning models built upon the same base model. Crucially, this is achieved at a tiny fraction of the computational post-training cost employed by existing SOTA models. In fact, the best Tina model achieves a >20% reasoning performance increase and 43.33% Pass@1 accuracy on AIME24, at only $9 USD post-training and evaluation cost (i.e., an estimated 260x cost reduction). Our work reveals the surprising effectiveness of efficient RL reasoning via LoRA. We validate this across multiple open-source reasoning datasets and various ablation settings, starting with a single, fixed set of hyperparameters. Furthermore, we hypothesize that this effectiveness and efficiency stem from LoRA rapidly adapting the model to the structural format of reasoning rewarded by RL, while largely preserving the base model's underlying knowledge. In service of accessibility and open research, we fully open-source all code, training logs, and model weights & checkpoints.
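To make the parameter-efficiency claim concrete, here is a minimal NumPy sketch of the LoRA update the abstract relies on: the frozen base weight W is perturbed by a low-rank product (alpha/r) * B @ A, and only A and B are trained. The dimensions and hyperparameters (d=64, r=4, alpha=8) are illustrative assumptions for this sketch, not Tina's actual configuration.

```python
import numpy as np

def lora_delta(A, B, alpha, r):
    """Low-rank weight update: delta_W = (alpha / r) * B @ A."""
    return (alpha / r) * (B @ A)

d_out, d_in, r, alpha = 64, 64, 4, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen base weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
                                            # so the update starts at exactly 0

W_eff = W + lora_delta(A, B, alpha, r)      # effective weight used at inference

# Only A and B are optimized, so the trainable fraction is tiny.
trainable_fraction = (A.size + B.size) / W.size
print(trainable_fraction)  # 0.125 here; far smaller at real model widths
```

With zero-initialized B, the adapted model starts out identical to the base model, and RL training only has to steer the small A/B matrices — consistent with the paper's hypothesis that LoRA mainly adapts the output format while preserving the base model's knowledge.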

@article{wang2025_2504.15777,
  title={Tina: Tiny Reasoning Models via LoRA},
  author={Shangshang Wang and Julian Asilis and Ömer Faruk Akgül and Enes Burak Bilgin and Ollie Liu and Willie Neiswanger},
  journal={arXiv preprint arXiv:2504.15777},
  year={2025}
}