Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models

14 October 2024
Jun Luo
Chen Chen
Shandong Wu
    FedML
    VLM
    MoE
ArXiv · PDF · HTML
Abstract

Federated prompt learning brings the robust representation learning ability of CLIP-like Vision-Language Models (VLMs) to federated learning through prompt learning. However, current federated prompt learning methods are habitually restricted to the traditional FL paradigm, where participating clients are generally only allowed to download a single globally aggregated model from the server. While justifiable for training full-sized models under federated settings, we argue that this paradigm is ill-suited for lightweight prompts. By enabling clients to download multiple pre-aggregated prompts as fixed non-local experts, we propose Personalized Federated Mixture of Adaptive Prompts (pFedMoAP), a novel FL framework that personalizes the prompt learning process through the lens of Mixture of Experts (MoE). pFedMoAP implements a local attention-based gating network that learns to generate enhanced text features for better alignment with local image data, benefiting from both the local and the downloaded non-local adaptive prompt experts. Extensive experiments on 9 datasets under various federated settings demonstrate the efficacy of the proposed pFedMoAP algorithm. The code is available at this https URL.

View on arXiv
@article{luo2025_2410.10114,
  title={Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models},
  author={Jun Luo and Chen Chen and Shandong Wu},
  journal={arXiv preprint arXiv:2410.10114},
  year={2025}
}