Flexible and Effective Mixing of Large Language Models into a Mixture of Domain Experts

30 August 2024
Authors: Rhui Dih Lee, L. Wynter, R. Ganti
Topic: MoE
arXiv: 2408.17280

Papers citing "Flexible and Effective Mixing of Large Language Models into a Mixture of Domain Experts"

1 / 1 papers shown

Title: Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts
Authors: Junmo Kang, Leonid Karlinsky, Hongyin Luo, Zhen Wang, Jacob A. Hansen, James Glass, David D. Cox, Rameswar Panda, Rogerio Feris, Alan Ritter
Topics: MoMe, MoE
Date: 17 Jun 2024