A Survey on Federated Fine-tuning of Large Language Models

15 March 2025
Yebo Wu, Chunlin Tian, Jingguang Li, He Sun, Kahou Tam, Li Li, Chengzhong Xu
Abstract

Large Language Models (LLMs) have achieved remarkable success across a wide range of tasks, with fine-tuning playing a pivotal role in adapting them to specific downstream applications. Federated Learning (FL) offers a promising approach to collaborative model adaptation while preserving data privacy; this combination is referred to as FedLLM. In this survey, we provide a systematic and thorough review of the integration of LLMs with FL. Specifically, we first trace the historical evolution of both LLMs and FL and summarize relevant prior surveys. We then present an in-depth analysis of the fundamental challenges encountered in deploying FedLLM. Following this, we conduct an extensive study of existing parameter-efficient fine-tuning (PEFT) methods and explore their applicability in FL. Furthermore, we introduce a comprehensive evaluation benchmark to rigorously assess FedLLM performance and discuss its diverse real-world applications across multiple domains. Finally, we identify critical open challenges and outline promising research directions to drive future advancements in FedLLM. We maintain an active GitHub repository tracking cutting-edge advancements. This survey serves as a foundational resource for researchers and practitioners, offering insights into the evolving landscape of federated fine-tuning for LLMs while guiding future innovations in privacy-preserving AI.
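As a rough illustration of the FedLLM setup the abstract describes (federated optimization of a parameter-efficient adapter around a frozen model, aggregated with FedAvg), the sketch below trains a LoRA-style low-rank adapter on a toy linear model with synthetic client data. All shapes, hyperparameters, and data are hypothetical and chosen for illustration only; this is not code from the survey or its repository.

# Minimal sketch of the FedLLM idea: federated fine-tuning of a frozen base
# model via a LoRA-style low-rank adapter, aggregated with FedAvg.
# Shapes, learning rate, and synthetic client data are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                   # hidden size and adapter rank (illustrative)
num_clients, rounds = 3, 5
lr, local_steps = 0.1, 20

# Frozen base weight shared by all clients; only the small adapter (A, B) is
# trained locally and communicated to the server.
W_base = rng.normal(scale=0.1, size=(d, d))
A_true = rng.normal(scale=0.1, size=(d, r))   # ground-truth low-rank shift
B_true = rng.normal(scale=0.1, size=(r, d))   # used to generate client data

def make_client(n=32):
    X = rng.normal(size=(n, d))
    Y = X @ (W_base + A_true @ B_true) + 0.01 * rng.normal(size=(n, d))
    return X, Y

def local_update(adapter, X, Y):
    """One client's local training on loss 0.5/n * ||X(W_base + AB) - Y||^2."""
    A, B = adapter["A"].copy(), adapter["B"].copy()
    n = len(X)
    for _ in range(local_steps):
        err = X @ (W_base + A @ B) - Y
        grad_A = X.T @ err @ B.T / n
        grad_B = A.T @ X.T @ err / n
        A -= lr * grad_A
        B -= lr * grad_B
    return {"A": A, "B": B}

clients = [make_client() for _ in range(num_clients)]
global_adapter = {"A": rng.normal(scale=0.1, size=(d, r)), "B": np.zeros((r, d))}

for t in range(rounds):
    updates = [local_update(global_adapter, X, Y) for X, Y in clients]
    # Server-side FedAvg over adapter parameters only; raw data never leaves clients.
    global_adapter = {k: np.mean([u[k] for u in updates], axis=0) for k in ("A", "B")}
    loss = np.mean([np.mean((X @ (W_base + global_adapter["A"] @ global_adapter["B"]) - Y) ** 2)
                    for X, Y in clients])
    print(f"round {t}: mean client MSE = {loss:.4f}")

In a real FedLLM deployment, W_base would be a pretrained LLM's frozen weights and the communicated adapter would be the PEFT parameters (e.g., LoRA matrices), which keeps both the per-round communication cost and the on-device memory footprint small relative to full-model fine-tuning.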

@article{wu2025_2503.12016,
  title={A Survey on Federated Fine-tuning of Large Language Models},
  author={Yebo Wu and Chunlin Tian and Jingguang Li and He Sun and Kahou Tam and Li Li and Chengzhong Xu},
  journal={arXiv preprint arXiv:2503.12016},
  year={2025}
}