Scalable LLM Math Reasoning Acceleration with Low-rank Distillation

8 May 2025
Harry Dong
Bilge Acun
Beidi Chen
Yuejie Chi
Abstract

Because of its long generations, large language model (LLM) math reasoning demands significant computational resources and time. While many existing efficient inference methods preserve performance well on language tasks, they often severely degrade math performance. In this paper, we propose Caprese, a low-cost distillation method, focused primarily on feedforward blocks, that recovers the capabilities lost when efficient inference methods are deployed. With the original weights unperturbed, roughly 1% additional parameters, and only 20K synthetic training samples, Caprese recovers much, if not all, of the math capability lost to efficient inference for thinking LLMs, without harming language tasks for instruct LLMs. Moreover, Caprese slashes the number of active parameters (a ~2B cut for Gemma 2 9B and Llama 3.1 8B) and integrates cleanly into existing model layers to reduce latency (>11% reduction when generating 2048 tokens with Qwen 2.5 14B) while encouraging response brevity.
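
As a rough illustration of the idea in the abstract (a minimal sketch only; the module name, rank, initialization, and training setup below are assumptions, not the paper's actual implementation), a small low-rank branch can be attached alongside a frozen feedforward block so that the original weights stay unperturbed and only a few extra parameters are trained by distillation:

import torch
import torch.nn as nn

class LowRankCorrection(nn.Module):
    """Hypothetical low-rank correction attached to a frozen feedforward block.

    The base FFN weights are never modified; only the rank-r factors are
    trained (e.g., by distilling against the original model's outputs),
    adding roughly 2 * d_model * rank parameters per block.
    """
    def __init__(self, ffn: nn.Module, d_model: int, rank: int = 64):
        super().__init__()
        self.ffn = ffn
        for p in self.ffn.parameters():
            p.requires_grad_(False)          # keep original weights unperturbed
        self.down = nn.Linear(d_model, rank, bias=False)   # d_model -> r
        self.up = nn.Linear(rank, d_model, bias=False)     # r -> d_model
        nn.init.zeros_(self.up.weight)       # start as a zero (identity) correction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # output of the (efficient) feedforward block plus a learned low-rank correction
        return self.ffn(x) + self.up(self.down(x))

In practice such a branch would be distilled against the original model's outputs on a small synthetic set (the abstract cites roughly 20K samples); the specific efficient inference method, distillation loss, and placement used by Caprese are not given here, so this sketch only conveys the general frozen-base, low-rank structure.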

View on arXiv
@article{dong2025_2505.07861,
  title={Scalable LLM Math Reasoning Acceleration with Low-rank Distillation},
  author={Harry Dong and Bilge Acun and Beidi Chen and Yuejie Chi},
  journal={arXiv preprint arXiv:2505.07861},
  year={2025}
}