ToVE: Efficient Vision-Language Learning via Knowledge Transfer from Vision Experts

Abstract

Vision-language (VL) learning requires extensive visual perception capabilities, such as fine-grained object recognition and spatial perception. Recent works typically rely on training huge models on massive datasets to develop these capabilities. As a more efficient alternative, this paper proposes a new framework that Transfers the knowledge from a hub of Vision Experts (ToVE) for efficient VL learning, leveraging pre-trained vision expert models to enhance visual perception capability. Specifically, building on a frozen CLIP encoder that provides vision tokens for image-conditioned language generation, ToVE introduces a hub of multiple vision experts and a token-aware gating network that dynamically routes expert knowledge to vision tokens. In the transfer phase, we propose a "residual knowledge transfer" strategy, which not only preserves the generalizability of the vision tokens but also allows low-contributing experts to be detached to improve inference efficiency. Further, we explore merging this expert knowledge into a single CLIP encoder, creating a knowledge-merged CLIP that produces more informative vision tokens without requiring expert inference during deployment. Experimental results across various VL tasks demonstrate that the proposed ToVE achieves competitive performance with two orders of magnitude less training data.
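The token-aware gating and residual transfer described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the token and expert dimensions, the linear-plus-softmax gate, and the projection of expert outputs to token space are all assumptions made for the sake of a concrete example.

```python
import numpy as np

rng = np.random.default_rng(0)
num_tokens, dim, num_experts = 4, 8, 3  # hypothetical sizes

# Vision tokens from the frozen CLIP encoder (placeholder values)
clip_tokens = rng.standard_normal((num_tokens, dim))
# Knowledge tokens from each expert, assumed already projected to token space
expert_tokens = rng.standard_normal((num_experts, num_tokens, dim))

# Token-aware gating: each token gets its own distribution over experts.
# A single linear map followed by softmax is an assumption for illustration.
W_gate = rng.standard_normal((dim, num_experts)) * 0.01

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

gates = softmax(clip_tokens @ W_gate)  # shape: (num_tokens, num_experts)

# Residual knowledge transfer: expert knowledge is *added* to the CLIP
# tokens rather than replacing them, so the original tokens (and their
# generalizability) are preserved, and zeroing an expert's gate column
# detaches that expert entirely at inference time.
residual = np.einsum('te,etd->td', gates, expert_tokens)
fused_tokens = clip_tokens + residual  # shape: (num_tokens, dim)
```

Because the transfer is residual, detaching a low-contributing expert amounts to dropping its gate column and its row of `expert_tokens`, leaving the remaining pathway intact.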

@article{wu2025_2504.00691,
  title={ToVE: Efficient Vision-Language Learning via Knowledge Transfer from Vision Experts},
  author={Yuanchen Wu and Junlong Du and Ke Yan and Shouhong Ding and Xiaoqiang Li},
  journal={arXiv preprint arXiv:2504.00691},
  year={2025}
}