Pre-trained large language models (LLMs) exhibit powerful capabilities for generating natural text. Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems. Motivated by the shared collective nature and directionality of text generation and evolution, this paper first illustrates the conceptual parallels between LLMs and EAs at a micro level, covering several one-to-one correspondences: token representation and individual representation, position encoding and fitness shaping, position embedding and selection, the Transformer block and reproduction, and model training and parameter adaptation. These parallels highlight potential opportunities for technical advances in both LLMs and EAs. Subsequently, we analyze existing interdisciplinary research from a macro perspective to uncover critical challenges, with a particular focus on evolutionary fine-tuning and LLM-enhanced EAs. These analyses not only provide insight into the evolutionary mechanisms behind LLMs but also suggest directions for enhancing the capabilities of artificial agents.
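To make one of these parallels concrete, the sketch below (not from the paper; `boltzmann_select` and `attention_weights` are illustrative names chosen here) shows a well-known structural echo between the two fields: Boltzmann (softmax) selection in an EA and scaled dot-product attention in a Transformer both reduce raw scores, fitness values in one case and query-key compatibilities in the other, to a normalized weighting via the same softmax step. This is a minimal illustration of the shared mechanism, not the paper's exact mapping.

```python
import numpy as np

def softmax(scores, temperature=1.0):
    """Numerically stable softmax, shared by both views below."""
    z = (scores - scores.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

# EA view: Boltzmann (softmax) selection turns raw fitness values into
# a sampling distribution over individuals (fitness shaping + selection).
def boltzmann_select(population, fitness, rng, temperature=1.0):
    probs = softmax(np.asarray(fitness, dtype=float), temperature)
    idx = rng.choice(len(population), size=len(population), p=probs)
    return [population[i] for i in idx]

# LLM view: scaled dot-product attention turns raw compatibility scores
# into a weighting over tokens -- the same softmax reweighting step.
def attention_weights(query, keys):
    scores = keys @ query / np.sqrt(query.shape[0])
    return softmax(scores)

rng = np.random.default_rng(0)
pop = ["a", "b", "c", "d"]
print(boltzmann_select(pop, [1.0, 2.0, 3.0, 4.0], rng))
print(attention_weights(rng.normal(size=8), rng.normal(size=(4, 8))))
```

In both views, lowering the temperature sharpens the distribution (greedier selection, more peaked attention), which is one way the micro-level analogy between selection pressure and attention concentration can be read.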
@article{wang2025_2401.10510,
  title   = {When Large Language Models Meet Evolutionary Algorithms: Potential Enhancements and Challenges},
  author  = {Chao Wang and Jiaxuan Zhao and Licheng Jiao and Lingling Li and Fang Liu and Shuyuan Yang},
  journal = {arXiv preprint arXiv:2401.10510},
  year    = {2025}
}