arXiv: 2410.02725
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation
3 October 2024
Rohin Manvi
Anikait Singh
Stefano Ermon
SyDa
Papers citing "Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation" (4 of 4 papers shown)
Title | Authors | Tags | Published
Scalable LLM Math Reasoning Acceleration with Low-rank Distillation | Harry Dong, Bilge Acun, Beidi Chen, Yuejie Chi | LRM | 08 May 2025
Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs | Jinyan Su, Jennifer Healey, Preslav Nakov, Claire Cardie | LRM | 30 Apr 2025
DISC: Dynamic Decomposition Improves LLM Inference Scaling | Jonathan Light, Wei Cheng, Wu Yue, Masafumi Oyamada, Mengdi Wang, Santiago Paternain, Haifeng Chen | ReLM, LRM | 23 Feb 2025
Make Every Penny Count: Difficulty-Adaptive Self-Consistency for Cost-Efficient Reasoning | Xinglin Wang, Shaoxiong Feng, Yiwei Li, Peiwen Yuan, Y. Zhang, Boyuan Pan, Heda Wang, Yao Hu, Kan Li | LRM | 24 Aug 2024