ϕ-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation

17 March 2025
Fangzhi Xu
Hang Yan
Chang Ma
Haiteng Zhao
Jun Liu
Qika Lin
Zhiyong Wu
Abstract

Inference-time optimization scales computation to derive deliberate reasoning steps for effective performance. While previous search-based strategies address the short-sightedness of auto-regressive generation, the vast search space leads to excessive exploration and insufficient exploitation. To strike an efficient balance and derive the optimal step, we frame the decoding strategy as foresight sampling, leveraging simulated future steps to obtain a globally optimal step estimation. Building on this, we propose a novel decoding strategy named ϕ-Decoding. To provide a precise and expressive estimation of step value, ϕ-Decoding approximates two distributions via foresight and clustering. By sampling from the joint distribution, the optimal steps can be selected for exploitation. To support adaptive computation allocation, we propose in-width and in-depth pruning strategies, offering a lightweight solution for efficient inference. Extensive experiments across seven benchmarks show that ϕ-Decoding outperforms strong baselines in both performance and efficiency. Additional analysis demonstrates its generalization across various LLMs and its scalability across a wide range of computing budgets. The code will be released at this https URL, and an open-source PyPI package is coming soon.
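To make the abstract's recipe concrete, here is a minimal Python sketch of foresight sampling combined with clustering. It is not the authors' implementation: `simulate_rollout`, the answer-based clustering, and the softmax weighting below are hypothetical stand-ins for the model calls and the two distributions the paper approximates, and the pruning strategies are omitted.

```python
import math
import random
from collections import Counter

def simulate_rollout(prefix, step, rng):
    """Hypothetical stand-in for an LLM foresight rollout. A real system would
    continue generating from prefix + step and return the completion plus its
    average token log-probability; here both are faked for illustration."""
    _ = (prefix, step)                        # unused in this toy version
    answer = rng.choice(["42", "42", "17"])   # biased toy final answers
    avg_logp = rng.uniform(-2.0, 0.0)         # toy foresight (value) score
    return answer, avg_logp

def phi_decoding_step(prefix, candidate_steps, rng, temperature=1.0):
    """One decoding step of a foresight-sampling sketch (not the authors'
    exact method): score each candidate step by a simulated future rollout
    and by the mass of the answer cluster its rollout lands in, then sample
    a step from the resulting joint distribution."""
    answers, foresight = [], []
    for step in candidate_steps:
        ans, logp = simulate_rollout(prefix, step, rng)
        answers.append(ans)
        foresight.append(logp)

    # Cluster rollouts by their final answer; a candidate whose foresight
    # path agrees with many others gets more cluster mass.
    counts = Counter(answers)
    cluster_mass = [counts[a] / len(answers) for a in answers]

    # Joint distribution: softmax over foresight scores, weighted by
    # cluster mass (log-space addition, then normalize).
    logits = [f / temperature + math.log(c)
              for f, c in zip(foresight, cluster_mass)]
    top = max(logits)
    weights = [math.exp(l - top) for l in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(candidate_steps, weights=probs, k=1)[0]

if __name__ == "__main__":
    rng = random.Random(0)
    steps = ["factor the expression", "test small cases", "guess and check"]
    print("selected step:", phi_decoding_step("Problem: ...", steps, rng))
```

The design choice to sample from the joint distribution, rather than greedily pick the top-scoring step, is what keeps exploration alive: low-scoring but plausible steps retain nonzero probability at each round.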

@article{xu2025_2503.13288,
  title={$\phi$-Decoding: Adaptive Foresight Sampling for Balanced Inference-Time Exploration and Exploitation},
  author={Fangzhi Xu and Hang Yan and Chang Ma and Haiteng Zhao and Jun Liu and Qika Lin and Zhiyong Wu},
  journal={arXiv preprint arXiv:2503.13288},
  year={2025}
}