
Soft Reasoning: Navigating Solution Spaces in Large Language Models through Controlled Embedding Exploration

Main: 9 pages · Appendix: 8 pages · Bibliography: 4 pages · 9 figures · 15 tables
Abstract

Large Language Models (LLMs) struggle with complex reasoning due to limited diversity and inefficient search. We propose Soft Reasoning, an embedding-based search framework that optimises the embedding of the first token to guide generation. It combines (1) embedding perturbation for controlled exploration and (2) Bayesian optimisation, which refines embeddings via a verifier-guided objective, balancing exploration and exploitation. This approach improves reasoning accuracy and coherence while avoiding reliance on heuristic search. Experiments demonstrate superior correctness with minimal computation, making it a scalable, model-agnostic solution. The code is released at this https URL.
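The two-part loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the verifier is a hypothetical toy objective, and the Bayesian-optimisation step is replaced by a simple best-so-far search over Gaussian perturbations, with the perturbation scale annealed to shift from exploration to exploitation.

```python
import random

def verifier_score(embedding):
    # Hypothetical stand-in for the verifier-guided objective:
    # a toy function peaked at a fixed target embedding.
    target = [0.5] * len(embedding)
    return -sum((e - t) ** 2 for e, t in zip(embedding, target))

def soft_reasoning_search(init_embedding, rounds=50, sigma=0.3, seed=0):
    """Simplified sketch: perturb the first-token embedding (exploration)
    and keep the best candidate under the verifier (exploitation).
    The actual method refines candidates with Bayesian optimisation."""
    rng = random.Random(seed)
    best = list(init_embedding)
    best_score = verifier_score(best)
    for i in range(rounds):
        # Anneal the perturbation scale: explore widely early, refine late.
        scale = sigma * (1 - i / rounds)
        candidate = [e + rng.gauss(0, scale) for e in best]
        score = verifier_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

start = [0.0] * 4
best, score = soft_reasoning_search(start)
print(score >= verifier_score(start))  # search never does worse than the start
```

In the paper's setting, `verifier_score` would come from a verifier model judging the generation seeded by the candidate embedding, and the candidate-proposal step would be driven by a Bayesian-optimisation acquisition function rather than pure random perturbation.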
