DISC: Dynamic Decomposition Improves LLM Inference Scaling

23 February 2025
Jonathan Light
Wei Cheng
Wu Yue
Masafumi Oyamada
Mengdi Wang
Santiago Paternain
Haifeng Chen
Abstract

Many inference scaling methods work by breaking a problem into smaller steps (or groups of tokens), then sampling and choosing the best next step. However, these steps and their sizes are usually predetermined based on human intuition or domain knowledge. This paper introduces dynamic decomposition, a method that automatically and adaptively splits solution and reasoning traces into steps during inference. This approach improves computational efficiency by focusing more resources on difficult steps, breaking them down further and prioritizing their sampling. Experiments on coding and math benchmarks (APPS, MATH, and LiveCodeBench) show that dynamic decomposition performs better than static methods, which rely on fixed steps like token-level, sentence-level, or single-step decompositions. These results suggest that dynamic decomposition can enhance many inference scaling techniques.
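
This page gives only the abstract, so the following is a minimal sketch of the idea rather than the paper's actual algorithm: the sample_step and step_score stand-ins, the acceptance threshold, and the step-halving rule are all illustrative assumptions. The sketch shows the core adaptive loop the abstract describes: candidate next steps are sampled, an easy step is committed at coarse granularity, and a difficult step is decomposed into smaller sub-steps that receive additional samples.

import random

def sample_step(prefix, n_tokens):
    # Toy stand-in for an LLM sampler: emit n_tokens placeholder tokens.
    return " ".join(f"tok{random.randrange(100)}" for _ in range(n_tokens))

def step_score(prefix, step):
    # Toy stand-in for a step verifier / process reward model in [0, 1].
    return random.random()

def dynamic_decompose(prompt, budget=64, init_step=32, min_step=4,
                      accept=0.7, width=4):
    # Sample `width` candidate next steps. Commit the best candidate if it
    # clears the acceptance threshold; otherwise halve the step size so the
    # difficult region is decomposed further and re-sampled more finely.
    solution, step = prompt, init_step
    while budget > 0:
        candidates = [sample_step(solution, step) for _ in range(width)]
        budget -= width
        best_score, best = max((step_score(solution, c), c) for c in candidates)
        if best_score >= accept or step <= min_step:
            solution += " " + best       # easy (or irreducible) step: commit it
            step = init_step             # reset granularity for the next step
        else:
            step = max(min_step, step // 2)   # hard step: split further
    return solution

print(dynamic_decompose("Solve: 12 * 7 ="))

In this sketch, step difficulty is proxied by the best candidate's score; the paper's actual difficulty measure and its scheme for prioritizing sampling across steps are not described on this page.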

@article{light2025_2502.16706,
  title={DISC: Dynamic Decomposition Improves LLM Inference Scaling},
  author={Jonathan Light and Wei Cheng and Wu Yue and Masafumi Oyamada and Mengdi Wang and Santiago Paternain and Haifeng Chen},
  journal={arXiv preprint arXiv:2502.16706},
  year={2025}
}