A Strategic Coordination Framework of Small LLMs Matches Large LLMs in Data Synthesis

11 April 2025
Xin Gao
Qizhi Pei
Zinan Tang
Yu Li
Honglin Lin
Jiang Wu
Conghui He
Lijun Wu
Abstract

While data synthesis and distillation are promising strategies to enhance small language models, current approaches rely heavily on Large Language Models (LLMs), which suffer from high computational costs, environmental inefficiency, and potential biases inherited from monolithic architectures. In contrast, smaller LLMs are more accessible and sustainable, but their individual capabilities often fall short in generating high-quality, diverse, and reliable data. Inspired by collaborative human processes (e.g., peer review), we propose GRA, a framework involving multiple small LLMs that aggregates specialized roles across them to achieve the iterative refinement and quality control typically performed by a single large LLM. In this collaborative framework, multiple small LLMs assume distinct roles (Generator, Reviewer, and Adjudicator) to simulate a peer-review-inspired data synthesis pipeline. The Generator proposes initial data samples, the Reviewer critiques their quality and diversity, and the Adjudicator resolves conflicts to finalize the output. By decomposing the synthesis process into specialized sub-tasks, collaborating small LLMs can achieve data-level parity with large LLM-based distillation. Through experiments across multiple benchmarks, we demonstrate that GRA-produced data matches or exceeds the quality of single large LLM outputs, e.g., Qwen-2.5-72B-Instruct. Our results challenge the necessity of monolithic large models for high-quality data synthesis, advocating instead for strategic coordination of smaller agents. Our datasets, models, and code are publicly available at this https URL.
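The abstract describes a three-role synthesis pipeline. The sketch below illustrates how such a Generator-Reviewer-Adjudicator loop could be wired together in Python; the role prompts, the query_model stub, the score format, and the 0.7 acceptance threshold are illustrative assumptions, not the authors' released implementation.

from statistics import mean


def query_model(model: str, prompt: str) -> str:
    """Stand-in for a chat-completion call to a small instruction-tuned LLM."""
    raise NotImplementedError  # hypothetical helper, not part of the paper's code


def synthesize_sample(task: str,
                      generator: str,
                      reviewers: list[str],
                      adjudicator: str,
                      accept_threshold: float = 0.7) -> str | None:
    # Generator role: propose an initial data sample for the task.
    candidate = query_model(generator, f"Write one training example for: {task}")

    # Reviewer role: each reviewer returns "score\ncritique" on quality and diversity.
    scores, critiques = [], []
    for reviewer in reviewers:
        reply = query_model(
            reviewer,
            "On the first line give a 0-1 quality/diversity score, "
            f"then a short critique:\n{candidate}",
        )
        score_line, _, critique = reply.partition("\n")
        scores.append(float(score_line))
        critiques.append(critique)

    # If reviewers roughly agree, accept or reject directly on the mean score.
    if max(scores) - min(scores) < 0.2:
        return candidate if mean(scores) >= accept_threshold else None

    # Adjudicator role: resolve reviewer conflicts and finalize the output.
    verdict = query_model(
        adjudicator,
        "Reviewers disagree on the example below. Reply ACCEPT or REJECT.\n\n"
        f"{candidate}\n\nCritiques:\n" + "\n".join(critiques),
    )
    return candidate if verdict.strip().upper().startswith("ACCEPT") else None

In this sketch the adjudicator is consulted only when reviewer scores diverge, mirroring the abstract's description of the Adjudicator as a conflict-resolution step rather than a routine judge.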

@article{gao2025_2504.12322,
  title={A Strategic Coordination Framework of Small LLMs Matches Large LLMs in Data Synthesis},
  author={Xin Gao and Qizhi Pei and Zinan Tang and Yu Li and Honglin Lin and Jiang Wu and Lijun Wu and Conghui He},
  journal={arXiv preprint arXiv:2504.12322},
  year={2025}
}