LLaVA-CMoE: Towards Continual Mixture of Experts for Large Vision-Language Models

27 March 2025
Hengyuan Zhao
Ziqin Wang
Qixin Sun
Kaiyou Song
Yilin Li
Xiaolin Hu
Qingpei Guo
Si Liu
    KELM
    CLL
    MoE
Abstract

Although applying Mixture of Experts to large language models for learning new tasks is widely regarded as an effective strategy for continual learning, two major challenges remain: (1) as the number of tasks grows, simple parameter expansion strategies can lead to excessively large models, and (2) modifying the parameters of the existing router erodes previously acquired knowledge. In this paper, we present LLaVA-CMoE, a continual Mixture of Experts (MoE) architecture that requires no replay data. Specifically, we develop Probe-Guided Knowledge Extension (PGKE), which employs probe experts to assess whether a specific layer requires additional knowledge. This approach allows the model to adaptively expand its network parameters based on the task distribution, significantly improving the efficiency of parameter expansion. Additionally, we introduce a hierarchical routing algorithm called Probabilistic Task Locator (PTL), where high-level routing captures inter-task information and low-level routing focuses on intra-task details, ensuring that experts added for new tasks do not interfere with existing ones. Our experiments show that this efficient architecture substantially improves performance on the CoIN benchmark while maintaining a reasonable parameter count.
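To make the two-level routing idea concrete, below is a minimal, self-contained PyTorch sketch of a hierarchical MoE layer in the spirit of PTL: a high-level router scores tasks, each task owns its own low-level router and expert group, and earlier tasks' parameters are frozen when a new task is added. Class and method names (HierarchicalMoELayer, add_task, make_expert) are illustrative assumptions, not the authors' implementation, and the probe-expert expansion step of PGKE is omitted.

# Hedged sketch: hierarchical (two-level) MoE routing with per-task expert groups.
# Names and structure are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_expert(d_model: int, d_hidden: int) -> nn.Module:
    # A standard feed-forward expert.
    return nn.Sequential(
        nn.Linear(d_model, d_hidden),
        nn.GELU(),
        nn.Linear(d_hidden, d_model),
    )


class HierarchicalMoELayer(nn.Module):
    """High-level routing scores tasks; each task's low-level router mixes
    only that task's own experts, so new tasks never re-route old ones."""

    def __init__(self, d_model: int, d_hidden: int, experts_per_task: int = 2):
        super().__init__()
        self.d_model = d_model
        self.d_hidden = d_hidden
        self.experts_per_task = experts_per_task
        self.task_keys = nn.ParameterList()      # one routing vector per task
        self.expert_routers = nn.ModuleList()    # one low-level router per task
        self.expert_groups = nn.ModuleList()     # one expert group per task

    def add_task(self) -> None:
        # Freeze everything learned so far, then append parameters for the new task.
        for p in self.parameters():
            p.requires_grad_(False)
        self.task_keys.append(nn.Parameter(0.02 * torch.randn(self.d_model)))
        self.expert_routers.append(nn.Linear(self.d_model, self.experts_per_task))
        self.expert_groups.append(nn.ModuleList(
            [make_expert(self.d_model, self.d_hidden) for _ in range(self.experts_per_task)]
        ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model). High-level routing: a probability over tasks.
        task_scores = torch.stack([x @ k for k in self.task_keys], dim=-1)
        task_probs = F.softmax(task_scores, dim=-1)            # (batch, n_tasks)
        out = torch.zeros_like(x)
        for t, (router, group) in enumerate(zip(self.expert_routers, self.expert_groups)):
            gate = F.softmax(router(x), dim=-1)                # (batch, experts_per_task)
            task_out = sum(gate[:, e:e + 1] * group[e](x) for e in range(len(group)))
            out = out + task_probs[:, t:t + 1] * task_out
        return out


# Usage: call add_task() before training on each new task; earlier tasks stay frozen.
layer = HierarchicalMoELayer(d_model=64, d_hidden=128)
layer.add_task()
y = layer(torch.randn(4, 64))   # (4, 64)

Keeping a separate low-level router per task is what prevents interference: training a new task only updates its own, newly added routing and expert parameters.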

@article{zhao2025_2503.21227,
  title={LLaVA-CMoE: Towards Continual Mixture of Experts for Large Vision-Language Models},
  author={Hengyuan Zhao and Ziqin Wang and Qixin Sun and Kaiyou Song and Yilin Li and Xiaolin Hu and Qingpei Guo and Si Liu},
  journal={arXiv preprint arXiv:2503.21227},
  year={2025}
}