ResearchTrend.AI

AugFL: Augmenting Federated Learning with Pretrained Models

4 March 2025
Sheng Yue, Zerui Qin, Yongheng Deng, Ju Ren, Yaoxue Zhang, Junshan Zhang
Abstract

Federated Learning (FL) has garnered widespread interest in recent years. However, owing to strict privacy policies or the limited storage capacities of training participants such as IoT devices, its effective deployment is often impeded by the scarcity of training data in practical decentralized learning environments. In this paper, we study enhancing FL with the aid of (large) pre-trained models (PMs), which encapsulate rich general, domain-agnostic knowledge, to alleviate the data requirements of conducting FL from scratch. Specifically, we consider a networked FL system formed by a central server and distributed clients. First, we formulate PM-aided personalized FL as a regularization-based federated meta-learning problem, where clients join forces to learn a meta-model with knowledge transferred from a private PM stored at the server. Then, we develop an inexact-ADMM-based algorithm, AugFL, to optimize the problem without exposing the PM or incurring additional computational costs on local clients. Further, we establish theoretical guarantees for AugFL in terms of communication complexity, adaptation performance, and the benefit of knowledge transfer in general non-convex cases. Extensive experiments corroborate the efficacy and superiority of AugFL over existing baselines.
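To make the problem setup concrete, the following is a minimal, hypothetical sketch of a regularization-based federated objective solved with ADMM-style consensus updates: clients minimize local losses, while the server's update pulls the global model toward a reference vector standing in for knowledge distilled from a pretrained model kept at the server. The quadratic client losses, the names `pm_reference`, `rho`, and `lam`, and the closed-form updates are illustrative assumptions for this toy example, not the paper's actual AugFL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 5, 4

# Each client i holds a toy quadratic loss f_i(w) = 0.5 * ||w - t_i||^2
# (an assumed stand-in for a real local training objective).
targets = rng.normal(size=(n_clients, d))

# Server-side reference vector standing in for knowledge transferred from a
# private pretrained model; it is never shipped to the clients in this sketch.
pm_reference = np.ones(d)

rho, lam = 1.0, 0.5              # ADMM penalty and PM-regularization weights
w = np.zeros(d)                  # global (meta) model
locals_ = np.zeros((n_clients, d))
duals = np.zeros((n_clients, d))

for _ in range(200):
    # Client step: each client minimizes
    # f_i(w_i) + (rho/2)||w_i - w + u_i||^2, which has a closed form here.
    for i in range(n_clients):
        locals_[i] = (targets[i] + rho * (w - duals[i])) / (1.0 + rho)
    # Server step: consensus average plus a proximal pull toward the
    # PM-derived reference (the "knowledge transfer" regularizer).
    w = (rho * (locals_ + duals).mean(axis=0) + lam * pm_reference) / (rho + lam)
    # Dual updates enforce consensus between local models and the global one.
    duals += locals_ - w

# At the fixed point, w minimizes the average client loss plus the
# PM regularizer: w* = (mean(t_i) + lam * pm_reference) / (1 + lam).
```

The fixed point shows the intended trade-off: the regularization weight `lam` interpolates the global model between the clients' data and the pretrained-model reference, which is the high-level mechanism the abstract describes.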

@article{yue2025_2503.02154,
  title={AugFL: Augmenting Federated Learning with Pretrained Models},
  author={Sheng Yue and Zerui Qin and Yongheng Deng and Ju Ren and Yaoxue Zhang and Junshan Zhang},
  journal={arXiv preprint arXiv:2503.02154},
  year={2025}
}