ResearchTrend.AI

Research on Large Language Model Cross-Cloud Privacy Protection and Collaborative Training based on Federated Learning

15 March 2025
Ze Yang
Yihong Jin
Yihan Zhang
Juntian Liu
Xinhe Xu
Abstract

The rapid development of large language models (LLMs) and the popularization of cloud computing have raised growing concerns about privacy protection and data security in cross-cloud model deployment and training. We present a new framework that addresses these issues and enables privacy-preserving collaborative training across distributed clouds based on federated learning. Our mechanism combines cutting-edge cryptographic primitives, dynamic model aggregation techniques, and cross-cloud data harmonization to improve the security, efficiency, and scalability of the traditional federated learning paradigm. Furthermore, we propose a hybrid aggregation scheme that mitigates the threat of data leakage and optimizes the aggregation of model updates, yielding substantial improvements in model effectiveness and stability. Experimental results demonstrate that the proposed model compares favorably with traditional federated learning in training efficiency, privacy protection, and model accuracy.
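The abstract's hybrid aggregation scheme builds on the standard federated pattern of combining per-client model updates on a server. The paper's cryptographic and dynamic-aggregation details are not given here, so the following is only a minimal sketch of the baseline weighted aggregation step (in the spirit of FedAvg), with hypothetical names (`aggregate_updates`, flat per-parameter weight vectors) chosen for illustration:

```python
# Minimal sketch of federated model-update aggregation (FedAvg-style).
# NOT the paper's hybrid scheme: no encryption or dynamic weighting here.
from typing import Dict, List

def aggregate_updates(updates: List[Dict[str, List[float]]],
                      weights: List[float]) -> Dict[str, List[float]]:
    """Weighted average of per-client parameter updates.

    `updates` maps parameter names to flat weight vectors; `weights`
    are typically proportional to each client's local dataset size.
    """
    total = sum(weights)
    agg = {name: [0.0] * len(vec) for name, vec in updates[0].items()}
    for client_update, w in zip(updates, weights):
        for name, vec in client_update.items():
            for i, v in enumerate(vec):
                agg[name][i] += v * (w / total)
    return agg

# Two clouds contribute updates; cloud A holds three times as much data.
a = {"layer": [1.0, 2.0]}
b = {"layer": [4.0, 5.0]}
print(aggregate_updates([a, b], weights=[3.0, 1.0]))  # {'layer': [1.75, 2.75]}
```

In a cross-cloud deployment as described in the abstract, each update would additionally be masked or encrypted before leaving its cloud, so the server only ever sees the aggregate.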

@article{yang2025_2503.12226,
  title={Research on Large Language Model Cross-Cloud Privacy Protection and Collaborative Training based on Federated Learning},
  author={Ze Yang and Yihong Jin and Yihan Zhang and Juntian Liu and Xinhe Xu},
  journal={arXiv preprint arXiv:2503.12226},
  year={2025}
}