ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Multi-LLM Collaborative Search for Complex Problem Solving

26 February 2025
Sen Yang
Yafu Li
Wai Lam
Yu Cheng
Abstract

Large language models (LLMs) often struggle with complex reasoning tasks due to their limitations in addressing the vast reasoning space and inherent ambiguities of natural language. We propose the Mixture-of-Search-Agents (MoSA) paradigm, a novel approach leveraging the collective expertise of multiple LLMs to enhance search-based reasoning. MoSA integrates diverse reasoning pathways by combining independent exploration with iterative refinement among LLMs, mitigating the limitations of single-model approaches. Using Monte Carlo Tree Search (MCTS) as a backbone, MoSA enables multiple agents to propose and aggregate reasoning steps, resulting in improved accuracy. Our comprehensive evaluation across four reasoning benchmarks demonstrates MoSA's consistent performance improvements over single-agent and other multi-agent baselines, particularly in complex mathematical and commonsense reasoning tasks.
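The core mechanism described above — MCTS where several distinct LLMs act as proposers at each expansion step — can be sketched in miniature. The code below is an illustrative toy, not the paper's implementation: the `agents` here are plain Python functions standing in for different LLMs, and the reward function, UCB constant, and depth limit are all assumed for demonstration.

```python
import math
import random

random.seed(0)  # reproducible toy run

class Node:
    """A node in the search tree holding a partial reasoning trace."""
    def __init__(self, state, parent=None):
        self.state = state          # list of reasoning steps so far
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def ucb(node, c=1.4):
    """Upper-confidence bound used for selection; unvisited nodes win."""
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mosa_search(root_state, agents, reward_fn, iterations=100, max_depth=4):
    """Sketch of multi-agent MCTS: every agent proposes one candidate
    next step at expansion, so the tree mixes reasoning styles."""
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend by UCB to a leaf.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # 2. Expansion: each agent contributes a candidate step.
        if len(node.state) < max_depth:
            for agent in agents:
                step = agent(node.state)
                node.children.append(Node(node.state + [step], parent=node))
            node = random.choice(node.children)
        # 3. Evaluation (stands in for a rollout) and 4. backpropagation.
        reward = reward_fn(node.state)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited path as the aggregated reasoning trace.
    node = root
    while node.children:
        node = max(node.children, key=lambda n: n.visits)
    return node.state

# Toy demo: two "agents" with different step styles; the (assumed)
# reward favors traces containing agent-A-style steps.
agent_a = lambda state: f"a-step-{len(state)}"
agent_b = lambda state: f"b-step-{len(state)}"
reward = lambda state: sum(1.0 for s in state if s.startswith("a"))

trace = mosa_search([], [agent_a, agent_b], reward)
print(trace)
```

The design point the sketch illustrates is that the proposal distribution at each node is a union over agents rather than repeated samples from one model, which is what lets the search escape a single model's blind spots.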

@article{yang2025_2502.18873,
  title={Multi-LLM Collaborative Search for Complex Problem Solving},
  author={Sen Yang and Yafu Li and Wai Lam and Yu Cheng},
  journal={arXiv preprint arXiv:2502.18873},
  year={2025}
}