Code-Driven Inductive Synthesis: Enhancing Reasoning Abilities of Large Language Models with Sequences

Large language models have made remarkable progress in reasoning. Existing work focuses mainly on deductive reasoning tasks (e.g., code and math), while inductive reasoning, a mode that better aligns with human learning, remains understudied. We attribute this gap to the difficulty of obtaining high-quality process supervision data for inductive reasoning. To this end, we employ number sequences as a source of inductive reasoning data. We package each sequence into an algorithmic problem: finding its general term through a code solution. In this way, we can verify whether the code solution holds for every term of the current sequence, and inject case-based supervision signals via code unit tests. We build a synthetic data pipeline for sequences and construct a training dataset, CodeSeq. Experimental results show that models tuned on CodeSeq improve on both code and comprehensive reasoning benchmarks.
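To make the verification idea concrete, here is a minimal sketch of case-based checking: a candidate code solution for a sequence's general term is unit-tested against every known term. The sequence, the candidate function, and the helper names below are illustrative assumptions, not the paper's actual pipeline code.

```python
def candidate_general_term(n: int) -> int:
    """Hypothetical model-generated solution: a(n) = n^2 + 1 (1-indexed)."""
    return n * n + 1


def verify_solution(terms, solution) -> bool:
    """Unit-test the candidate against every known term of the sequence.

    Each known term acts as one test case, providing a case-based
    supervision signal: the candidate passes only if it reproduces
    the entire observed prefix of the sequence.
    """
    return all(solution(i + 1) == t for i, t in enumerate(terms))


if __name__ == "__main__":
    sequence = [2, 5, 10, 17, 26]  # known prefix of the target sequence
    print(verify_solution(sequence, candidate_general_term))  # True
```

A failing term pinpoints exactly which case the candidate gets wrong, which is what allows per-case supervision signals rather than a single pass/fail label for the whole problem.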
@article{chen2025_2503.13109,
  title   = {Code-Driven Inductive Synthesis: Enhancing Reasoning Abilities of Large Language Models with Sequences},
  author  = {Kedi Chen and Zhikai Lei and Fan Zhang and Yinqi Zhang and Qin Chen and Jie Zhou and Liang He and Qipeng Guo and Kai Chen and Wei Zhang},
  journal = {arXiv preprint arXiv:2503.13109},
  year    = {2025}
}