Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation

3 October 2024
Rohin Manvi, Anikait Singh, Stefano Ermon
SyDa

Papers citing "Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation"

4 / 4 papers shown
Scalable LLM Math Reasoning Acceleration with Low-rank Distillation
Harry Dong, Bilge Acun, Beidi Chen, Yuejie Chi
LRM
08 May 2025
Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs
Jinyan Su, Jennifer Healey, Preslav Nakov, Claire Cardie
LRM
30 Apr 2025
DISC: Dynamic Decomposition Improves LLM Inference Scaling
Jonathan Light, Wei Cheng, Wu Yue, Masafumi Oyamada, Mengdi Wang, Santiago Paternain, Haifeng Chen
ReLM, LRM
23 Feb 2025
Make Every Penny Count: Difficulty-Adaptive Self-Consistency for Cost-Efficient Reasoning
Xinglin Wang, Shaoxiong Feng, Yiwei Li, Peiwen Yuan, Y. Zhang, Boyuan Pan, Heda Wang, Yao Hu, Kan Li
LRM
24 Aug 2024