Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation

Abstract

Post-training is essential for enabling large language models (LLMs) to follow human instructions. However, its effectiveness depends on high-quality instruction data, which is challenging to obtain in the real world due to privacy concerns, data scarcity, and high annotation costs. To fill this gap, inspired by the recent success of using LLMs to simulate human society, we propose MATRIX, a multi-agent simulator that automatically generates diverse text-based scenarios, capturing a wide range of real-world human needs in a realistic and scalable manner. Leveraging these outputs, we introduce MATRIX-Gen, a novel scenario-driven instruction generator for controllable and highly realistic data synthesis. Extensive experiments demonstrate that our framework effectively generates both general and domain-specific data. On the AlpacaEval 2 and Arena-Hard benchmarks, Llama-3-8B-Base, post-trained on datasets synthesized by MATRIX-Gen with just 20K instruction-response pairs, outperforms Meta's Llama-3-8B-Instruct model, which was trained on over 10M pairs.

@article{tang2025_2410.14251,
  title={Synthesizing Post-Training Data for LLMs through Multi-Agent Simulation},
  author={Shuo Tang and Xianghe Pang and Zexi Liu and Bohan Tang and Rui Ye and Tian Jin and Xiaowen Dong and Yanfeng Wang and Siheng Chen},
  journal={arXiv preprint arXiv:2410.14251},
  year={2025}
}