One Student Knows All Experts Know: From Sparse to Dense

The human education system trains one student with multiple experts. Mixture-of-Experts (MoE) is a powerful sparse architecture that includes multiple experts. However, a sparse MoE model is prone to overfitting, hard to deploy, and not hardware-friendly for practitioners. In this work, inspired by the human education model, we propose a novel task, knowledge integration, to obtain a dense student model (OneS) that is as knowledgeable as the sparse MoE. We investigate this task with a general training framework consisting of knowledge gathering and knowledge distillation. Specifically, to gather key knowledge from the different pre-trained experts, we first investigate four possible knowledge gathering methods, i.e., summation, averaging, Top-K Knowledge Gathering (Top-KG), and Singular Value Decomposition Knowledge Gathering (SVD-KG), the last of which is proposed in this paper. We then refine the dense student model by knowledge distillation to offset the noise introduced by gathering. On ImageNet, our OneS preserves the benefits of MoE and achieves competitive top-1 accuracy with only a fraction of the parameters. On four natural language processing datasets, OneS obtains the MoE benefits and outperforms the best baseline while using the same architecture and training data. In addition, compared with its MoE counterpart, OneS achieves a notable inference speedup thanks to less computation and a hardware-friendly architecture.
View on arXiv
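
To make the two-stage framework more concrete, below is a minimal PyTorch-style sketch of one plausible reading of SVD-based knowledge gathering followed by a standard soft-target distillation loss. The expert-averaging step, the chosen rank, the temperature, and all function names and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def svd_knowledge_gathering(expert_weights, rank):
    """Merge several expert weight matrices into one dense student matrix.

    Illustrative sketch: average the experts, then keep only the top
    `rank` singular components as the "gathered" knowledge used to
    initialize the dense student layer. The averaging step and the
    rank are assumptions made for this example.
    """
    # expert_weights: list of tensors, each of shape (d_out, d_in)
    stacked = torch.stack(expert_weights, dim=0)   # (n_experts, d_out, d_in)
    merged = stacked.mean(dim=0)                   # simple average over experts
    U, S, Vh = torch.linalg.svd(merged, full_matrices=False)
    # Keep the top-`rank` singular directions of the merged matrix.
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]


def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Standard temperature-scaled KL distillation loss (soft targets).

    Used here to stand in for the refinement stage that offsets the
    noise introduced by gathering; the temperature T is an assumption.
    """
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)


# Toy usage: four experts with 256x256 weight matrices, keep rank 64.
experts = [torch.randn(256, 256) for _ in range(4)]
student_weight = svd_knowledge_gathering(experts, rank=64)
print(student_weight.shape)  # torch.Size([256, 256])
```

In this reading, the low-rank reconstruction serves only as an initialization for the dense student, and the distillation loss is then minimized against the sparse MoE teacher's outputs on the original training data.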