
μ-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts

Main: 4 pages, 4 figures, 5 tables; Appendix: 3 pages; Bibliography: 3 pages
Abstract

To tackle the enormous computational demand of large foundation models, activation-aware compression techniques that require no retraining have been introduced. However, because these methods rely on calibration data, domain shift can arise on unseen downstream tasks. With a computationally efficient calibration procedure, activation-aware pruning can instead be executed adaptively for every prompt, while still reducing complexity at inference. We formulate this as a mixture of micro-experts, called μ-MoE. Several experiments demonstrate that μ-MoE can dynamically adapt to task- and prompt-dependent structured sparsity on the fly.
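To make the idea concrete, below is a minimal sketch (not the authors' code) of per-prompt, activation-aware structured pruning of a linear layer, where each output channel is treated as a "micro-expert" and an activation-aware score decides which experts fire for the current prompt. The Wanda-style weight-times-activation score, the function name `micro_moe_linear`, and the `keep_ratio` parameter are illustrative assumptions, not details from the paper.

```python
# Sketch: prompt-dependent structured sparsity as gating of micro-experts.
import torch

def micro_moe_linear(x, weight, bias=None, keep_ratio=0.5):
    """Linear layer with activation-aware, per-prompt channel pruning.

    x:      (tokens, in_features) activations of the current prompt
    weight: (out_features, in_features); each row is one micro-expert
    """
    # Activation-aware importance: weight magnitude scaled by the average
    # input-feature magnitude observed on this prompt (assumed proxy score).
    act_norm = x.abs().mean(dim=0)                    # (in_features,)
    scores = (weight.abs() * act_norm).sum(dim=1)     # (out_features,)

    # Route: keep only the top-k micro-experts for this prompt.
    k = max(1, int(keep_ratio * weight.shape[0]))
    kept = torch.topk(scores, k).indices

    # Compute only the kept rows, then scatter into a full-size output.
    y_kept = x @ weight[kept].T
    if bias is not None:
        y_kept = y_kept + bias[kept]
    y = x.new_zeros(x.shape[0], weight.shape[0])
    y[:, kept] = y_kept
    return y

# Example: different prompts activate different micro-experts on a toy layer.
x = torch.randn(16, 64)      # activations for one prompt
W = torch.randn(128, 64)
y = micro_moe_linear(x, W, keep_ratio=0.25)
```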
