
Exploiting Mixture-of-Experts Redundancy Unlocks Multimodal Generative Abilities

Abstract

In this work, we undertake the challenge of augmenting the existing generative capabilities of pre-trained text-only large language models (LLMs) with multi-modal generation capability while satisfying two core constraints: C1 preserving the original language generative capabilities with negligible performance degradation, and C2 adhering to a small parameter budget for learning the new modality, ensuring scalability and efficiency. In contrast to current approaches that add dedicated modules, thereby significantly increasing the parameter count, we propose a method that leverages the underutilized capacity inherent in deep models. Specifically, we exploit the parameter redundancy within Mixture-of-Experts (MoEs) as a source of additional capacity for learning a new modality, enabling better parameter efficiency (C2). Moreover, we preserve the original language generation capabilities by applying low-rank adaptation exclusively to the tokens of the new modality (C1). Furthermore, we introduce a novel parameter initialization scheme based on the Gromov-Wasserstein distance to improve convergence and training stability. Through an extensive analysis of the routing mechanism, we uncover the emergence of modality-specific pathways and decreased redundancy within the experts, which can efficiently unlock multi-modal generative capabilities. Overall, our method can be seamlessly applied to a wide range of contemporary LLMs, providing a new pathway for transitioning from uni-modal to multi-modal architectures.
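To make the modality-restricted adaptation concrete, below is a minimal sketch (not the authors' implementation) of applying a low-rank update only to tokens of the new modality while leaving text tokens on the frozen pre-trained path. The class name `ModalityGatedLoRALinear`, the rank/alpha values, and the boolean `modality_mask` convention are illustrative assumptions; the abstract does not specify these details.

```python
# Hypothetical sketch: low-rank adaptation gated by a modality mask, so that
# text tokens see exactly the frozen layer and only new-modality tokens are adapted.
import torch
import torch.nn as nn


class ModalityGatedLoRALinear(nn.Module):
    """Wraps a frozen linear layer; adds a low-rank update only where
    `modality_mask` marks new-modality (e.g. image) tokens."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # keep original weights frozen (C1)
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)    # zero-init so training starts from the base model
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor, modality_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, in_features); modality_mask: (batch, seq) bool,
        # True for tokens of the new modality.
        out = self.base(x)
        delta = self.lora_b(self.lora_a(x)) * self.scale
        return out + delta * modality_mask.unsqueeze(-1).to(delta.dtype)


# Usage: text tokens (mask False) get the frozen layer's output unchanged,
# so language behaviour is preserved; only new-modality tokens are adapted.
layer = ModalityGatedLoRALinear(nn.Linear(512, 512), rank=8)
x = torch.randn(2, 10, 512)
mask = torch.zeros(2, 10, dtype=torch.bool)
mask[:, 5:] = True                            # last 5 tokens belong to the new modality
y = layer(x, mask)
```

The low-rank matrices add only a small number of trainable parameters per layer (C2), and the zero-initialized `lora_b` guarantees the adapted model is initially identical to the pre-trained LLM.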

@article{dutt2025_2503.22517,
  title={Exploiting Mixture-of-Experts Redundancy Unlocks Multimodal Generative Abilities},
  author={Raman Dutt and Harleen Hanspal and Guoxuan Xia and Petru-Daniel Tudosiu and Alexander Black and Yongxin Yang and Steven McDonagh and Sarah Parisot},
  journal={arXiv preprint arXiv:2503.22517},
  year={2025}
}