Data Diversification Methods In Alignment Enhance Math Performance In LLMs

While recent advances in preference learning have improved alignment with human feedback, mathematical reasoning remains a persistent challenge. We investigate how data diversification strategies in preference optimization can improve the mathematical reasoning abilities of large language models (LLMs). We evaluate three common data generation methods: temperature sampling, Chain-of-Thought prompting, and Monte Carlo Tree Search (MCTS). We also introduce Diversified-ThinkSolve (DTS), a novel structured approach that systematically decomposes problems into diverse reasoning paths. Our results show that strategically diversified preference data substantially improves mathematical reasoning performance, with the best approach yielding gains of 7.1% on GSM8K and 4.2% on MATH over the base model. Despite its strong performance, DTS incurs only marginal computational overhead (1.03x) over the baseline, whereas MCTS is nearly five times more costly while yielding lower returns. These findings demonstrate that structured exploration of diverse problem-solving methods creates more effective preference data for mathematical alignment than traditional approaches.
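To make the diversification idea concrete, here is a minimal sketch of the simplest strategy named above: building preference pairs from temperature-sampled completions. The `generate` and `is_correct` callables are hypothetical stand-ins for an LLM sampler and an answer checker, and this is not the paper's DTS pipeline (which decomposes problems into diverse reasoning paths before solving).

```python
# Sketch: constructing (prompt, chosen, rejected) preference triples by
# sampling at several temperatures and filtering against a gold answer.
# `generate` and `is_correct` are assumed interfaces, not the paper's code.
import random
from typing import Callable, List, Sequence, Tuple

def build_preference_pairs(
    problems: List[Tuple[str, str]],            # (question, gold_answer)
    generate: Callable[[str, float], str],      # hypothetical LLM sampler
    is_correct: Callable[[str, str], bool],     # hypothetical answer checker
    temperatures: Sequence[float] = (0.2, 0.7, 1.0),
    samples_per_temp: int = 4,
) -> List[Tuple[str, str, str]]:
    """Return (prompt, chosen, rejected) triples for preference optimization."""
    pairs: List[Tuple[str, str, str]] = []
    for question, gold in problems:
        correct: List[str] = []
        incorrect: List[str] = []
        for t in temperatures:
            for _ in range(samples_per_temp):
                answer = generate(question, t)  # diversity comes from varying t
                (correct if is_correct(answer, gold) else incorrect).append(answer)
        # A usable pair needs at least one correct and one incorrect completion.
        if correct and incorrect:
            pairs.append((question, random.choice(correct), random.choice(incorrect)))
    return pairs
```

Under this framing, DTS differs in where diversity comes from: rather than relying on sampling noise, it structures exploration over distinct problem-solving approaches, which the abstract reports yields better preference data at near-baseline cost.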
@article{dokmeci2025_2507.02173,
  title   = {Data Diversification Methods In Alignment Enhance Math Performance In LLMs},
  author  = {Berkan Dokmeci and Qingyang Wu and Ben Athiwaratkun and Ce Zhang and Shuaiwen Leon Song and James Zou},
  journal = {arXiv preprint arXiv:2507.02173},
  year    = {2025}
}