MoE Adapter for Large Audio Language Models: Sparsity, Disentanglement, and Gradient-Conflict-Free

6 January 2026
Yishu Lei
Shuwei He
Jing Hu
Dan Zhang
Xianlong Luo
Danxiang Zhu
Shikun Feng
Rui Liu
Jingzhou He
Yu Sun
Hua Wu
Haifeng Wang
Communities: AuLLM, MoE
Links: arXiv (abs), PDF, HTML

Papers citing "MoE Adapter for Large Audio Language Models: Sparsity, Disentanglement, and Gradient-Conflict-Free"

No citing papers found.