
Can Large Language Models Invent Algorithms to Improve Themselves?

Abstract

Large Language Models (LLMs) have shown remarkable performance improvements and are rapidly gaining adoption in industry. However, the methods for improving LLMs are still designed by humans, which restricts the invention of new model-improving algorithms to human expertise and imagination. To address this, we propose the Self-Developing framework, which enables LLMs to autonomously generate and learn model-improvement algorithms. In this framework, the seed model generates, applies, and learns model-improving algorithms, continuously improving both the seed model and the algorithms themselves. Among model-improving strategies, we focus on model merging algorithms. In mathematical reasoning tasks, Self-Developing discovers novel merging strategies and outperforms human-designed methods. On GSM8k, the discovered algorithms improve the seed model by 6% and surpass human-designed methods by 4.3%. Moreover, they exhibit strong transferability, achieving a 7.4% performance gain on out-of-domain models. These results suggest that LLMs can autonomously develop effective model-improvement techniques beyond human intuition.
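Among the model-improving strategies the framework can generate, the paper focuses on model merging. For illustration only (this is not one of the paper's discovered algorithms), a minimal sketch of a simple human-designed merging baseline, linear weight interpolation between two models, assuming parameters are stored as name-to-tensor mappings:

```python
# Illustrative sketch: linear weight interpolation, a simple human-designed
# model-merging baseline of the kind the Self-Developing framework aims to
# surpass with LLM-generated algorithms. Tensors are represented as plain
# lists of floats to keep the example dependency-free; real merging operates
# on model state dicts (e.g., PyTorch tensors).

def linear_merge(state_a, state_b, alpha=0.5):
    """Merge two models' parameters as alpha * a + (1 - alpha) * b."""
    assert state_a.keys() == state_b.keys(), "models must share an architecture"
    return {
        name: [alpha * wa + (1 - alpha) * wb
               for wa, wb in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Hypothetical two-parameter "models" (names and values are made up):
seed = {"layer.weight": [1.0, 3.0]}
expert = {"layer.weight": [3.0, 1.0]}
merged = linear_merge(seed, expert, alpha=0.25)
print(merged["layer.weight"])  # [2.5, 1.5]
```

In the Self-Developing framework, the seed model would propose merging functions like this one (typically with learned or search-derived coefficients rather than a fixed `alpha`), apply them, and keep the variants that improve benchmark performance.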

@article{ishibashi2025_2410.15639,
  title={Can Large Language Models Invent Algorithms to Improve Themselves?},
  author={Yoichi Ishibashi and Taro Yano and Masafumi Oyamada},
  journal={arXiv preprint arXiv:2410.15639},
  year={2025}
}