Recent Advances in Federated Learning Driven Large Language Models: A Survey on Architecture, Performance, and Security

Federated Learning (FL) offers a promising paradigm for training Large Language Models (LLMs) in a decentralized manner, preserving data privacy by keeping raw training data on client devices rather than centralizing it. This survey examines recent advances in FL-driven LLMs, with a particular emphasis on architectural designs, performance optimization, and security concerns, including the emerging area of machine unlearning. In this context, machine unlearning refers to the systematic removal of specific data contributions from trained models to comply with privacy regulations such as the Right to be Forgotten. We review a range of strategies for enabling unlearning in federated LLMs, including perturbation-based methods, model decomposition, and incremental retraining, and evaluate their trade-offs in efficiency, privacy guarantees, and model utility. Through selected case studies and empirical evaluations, we analyze how these methods perform in practical FL scenarios. Finally, this survey identifies critical research directions toward developing secure, adaptable, and high-performing federated LLM systems for real-world deployment.
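
As an illustrative sketch only, and not code from the surveyed paper, the example below shows one way a perturbation-based unlearning round could look in a federated setting: each client runs gradient ascent on the examples it must forget, briefly fine-tunes on retained data to recover utility, and the server averages the resulting weights in FedAvg style. The model, hyperparameters, and helper names (client_unlearn, fedavg) are hypothetical placeholders, and a toy linear model stands in for an LLM.

    # Hypothetical sketch of perturbation-based federated unlearning (toy model, not the paper's method).
    import copy
    import torch
    import torch.nn as nn

    def client_unlearn(global_model, forget_x, forget_y, retain_x, retain_y,
                       ascent_lr=0.05, tune_lr=0.01, steps=5):
        model = copy.deepcopy(global_model)
        loss_fn = nn.MSELoss()
        # 1) Perturbation step: gradient *ascent* on the forget set to erase its influence.
        opt = torch.optim.SGD(model.parameters(), lr=ascent_lr)
        for _ in range(steps):
            opt.zero_grad()
            (-loss_fn(model(forget_x), forget_y)).backward()  # negated loss => ascent
            opt.step()
        # 2) Recovery step: brief fine-tuning on retained data to restore utility.
        opt = torch.optim.SGD(model.parameters(), lr=tune_lr)
        for _ in range(steps):
            opt.zero_grad()
            loss_fn(model(retain_x), retain_y).backward()
            opt.step()
        return model.state_dict()

    def fedavg(state_dicts):
        # Uniform average of client weights; real systems typically weight by client data size.
        avg = copy.deepcopy(state_dicts[0])
        for key in avg:
            avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        return avg

    if __name__ == "__main__":
        torch.manual_seed(0)
        global_model = nn.Linear(8, 1)  # stand-in for an LLM
        updates = []
        for _ in range(3):  # three simulated clients
            fx, fy = torch.randn(4, 8), torch.randn(4, 1)    # data to forget
            rx, ry = torch.randn(16, 8), torch.randn(16, 1)  # data to retain
            updates.append(client_unlearn(global_model, fx, fy, rx, ry))
        global_model.load_state_dict(fedavg(updates))

The ascent-then-recover pattern illustrates the efficiency and utility trade-off noted in the abstract: it avoids full retraining, but the recovery step is needed because unconstrained ascent also degrades performance on retained data.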
@article{qu2025_2406.09831,
  title   = {Recent Advances in Federated Learning Driven Large Language Models: A Survey on Architecture, Performance, and Security},
  author  = {Youyang Qu and Ming Liu and Tianqing Zhu and Longxiang Gao and Shui Yu and Wanlei Zhou},
  journal = {arXiv preprint arXiv:2406.09831},
  year    = {2025}
}