STORM-BORN: A Challenging Mathematical Derivations Dataset Curated via a Human-in-the-Loop Multi-Agent Framework

High-quality math datasets are crucial for advancing the reasoning abilities of large language models (LLMs). However, existing datasets often suffer from three key issues: outdated and insufficiently challenging content, neglect of human-like reasoning, and limited reliability due to single-LLM generation. To address these, we introduce STORM-BORN, an ultra-challenging dataset of mathematical derivations sourced from cutting-edge academic papers, which includes dense human-like approximations and heuristic cues. To ensure reliability and quality, we propose a novel human-in-the-loop, multi-agent data generation framework that integrates reasoning-dense filters, multi-agent collaboration, and evaluations by human mathematicians. We curated a set of 2,000 synthetic samples and deliberately selected the 100 most difficult problems. Even the most advanced models, such as GPT-o1, solved fewer than of them. Fine-tuning on STORM-BORN boosts accuracy by (LLaMA3-8B) and (Qwen2.5-7B). As AI approaches mathematician-level reasoning, STORM-BORN provides both a high-difficulty benchmark and a human-like reasoning training resource. Our code and dataset are publicly available at this https URL.
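The curation pipeline described above can be sketched in simplified form: candidate derivations pass a reasoning-density filter, are refined through multi-agent collaboration, and only human-approved samples enter the dataset. This is a minimal illustrative sketch, not the authors' actual implementation; all function names, fields, and thresholds below are hypothetical.

```python
# Hypothetical sketch of the human-in-the-loop, multi-agent curation
# pipeline described in the abstract. Names and thresholds are
# illustrative assumptions, not the paper's real API.
from dataclasses import dataclass


@dataclass
class Sample:
    derivation: str
    reasoning_steps: int          # crude proxy for reasoning density
    approved_by_human: bool = False


def reasoning_dense_filter(samples, min_steps=5):
    """Keep only candidates with enough derivation steps (reasoning-dense filter)."""
    return [s for s in samples if s.reasoning_steps >= min_steps]


def multi_agent_refine(sample):
    """Placeholder for collaborative refinement by several LLM agents.

    In the real framework this would involve generator/critic agents
    exchanging drafts; here we only tag the derivation as refined.
    """
    sample.derivation += " [refined]"
    return sample


def human_review(sample):
    """Placeholder for evaluation by a human mathematician."""
    sample.approved_by_human = True  # assume approval for this sketch
    return sample


def curate(candidates, min_steps=5):
    """Filter, refine, and human-review candidates; return approved samples."""
    kept = reasoning_dense_filter(candidates, min_steps)
    reviewed = [human_review(multi_agent_refine(s)) for s in kept]
    return [s for s in reviewed if s.approved_by_human]
```

The design point the abstract emphasizes is that no single LLM's output enters the dataset unchecked: every sample survives both an automatic density filter and a final human gate.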
@article{liu2025_2506.01531,
  title   = {STORM-BORN: A Challenging Mathematical Derivations Dataset Curated via a Human-in-the-Loop Multi-Agent Framework},
  author  = {Wenhao Liu and Zhenyi Lu and Xinyu Hu and Jierui Zhang and Dailin Li and Jiacheng Cen and Huilin Cao and Haiteng Wang and Yuhan Li and Kun Xie and Dandan Li and Pei Zhang and Chengbo Zhang and Yuxiang Ren and Xiaohong Huang and Yan Ma},
  journal = {arXiv preprint arXiv:2506.01531},
  year    = {2025}
}