The tool-using capability of large language models (LLMs) enables them to access up-to-date external information and handle complex tasks. Current approaches to enhancing this capability primarily rely on distilling advanced models via data synthesis. However, this method incurs significant costs from advanced-model usage and often yields data compatibility issues, caused by the large discrepancy in knowledge scope between the advanced model and the target model. To address these challenges, we propose ToolACE-DEV, a self-improving framework for tool learning. First, we decompose the tool-learning objective into sub-tasks that strengthen basic tool-making and tool-using abilities. Then, we introduce a self-evolving paradigm that allows lightweight models to improve themselves, reducing reliance on advanced LLMs. Extensive experiments validate the effectiveness of our approach across models of varying scales and architectures.
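To make the self-evolving paradigm concrete, the following is a minimal illustrative sketch (not the paper's actual implementation) of one self-evolution round: the model proposes candidate tool calls, an automatic checker validates them against the tool's schema, and only the validated samples are kept as fine-tuning data for the next round. The tool name `get_weather` and the validation logic are hypothetical stand-ins.

```python
import json

# Hypothetical tool schema the model is expected to call against.
TOOL_SCHEMA = {"name": "get_weather", "required": ["city"]}

def validate(call_json: str, schema: dict) -> bool:
    """Accept a candidate only if it parses as JSON and matches the schema."""
    try:
        call = json.loads(call_json)
    except json.JSONDecodeError:
        return False
    if call.get("name") != schema["name"]:
        return False
    return all(arg in call.get("arguments", {}) for arg in schema["required"])

def self_evolve_round(candidates: list) -> list:
    """One round: filter the model's own outputs into a new training set."""
    return [c for c in candidates if validate(c, TOOL_SCHEMA)]

# Candidate tool calls the model might generate for "weather in Paris?":
candidates = [
    '{"name": "get_weather", "arguments": {"city": "Paris"}}',  # valid
    '{"name": "get_weather", "arguments": {}}',                 # missing required arg
    'get_weather(city="Paris")',                                # not JSON
]
training_set = self_evolve_round(candidates)
print(len(training_set))  # only the schema-valid sample survives
```

In a full loop, the surviving samples would be used to fine-tune the lightweight model before the next generation round, so the model's data quality improves without querying an advanced LLM.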
@article{huang2025_2505.07512,
  title={ToolACE-DEV: Self-Improving Tool Learning via Decomposition and EVolution},
  author={Xu Huang and Weiwen Liu and Xingshan Zeng and Yuefeng Huang and Xinlong Hao and Yuxian Wang and Yirong Zeng and Chuhan Wu and Yasheng Wang and Ruiming Tang and Defu Lian},
  journal={arXiv preprint arXiv:2505.07512},
  year={2025}
}