LaCo: Efficient Layer-wise Compression of Visual Tokens for Multimodal Large Language Models

3 July 2025
Juntao Liu
Liqiang Niu
Wenchao Chen
Jie Zhou
Fandong Meng
Main: 8 pages · 5 figures · 9 tables · Bibliography: 3 pages · Appendix: 4 pages
Abstract

Existing visual token compression methods for Multimodal Large Language Models (MLLMs) predominantly operate as post-encoder modules, limiting their potential for efficiency gains. To address this limitation, we propose LaCo (Layer-wise Visual Token Compression), a novel framework that enables effective token compression within the intermediate layers of the vision encoder. LaCo introduces two core components: 1) a layer-wise pixel-shuffle mechanism that systematically merges adjacent tokens through space-to-channel transformations, and 2) a residual learning architecture with non-parametric shortcuts that preserves critical visual information during compression. Extensive experiments show that LaCo outperforms all existing methods when compressing tokens in the intermediate layers of the vision encoder. In addition, compared to external compression, our method improves training efficiency by more than 20% and inference throughput by more than 15% while maintaining strong performance.
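To make the two components concrete, here is a minimal NumPy sketch of the space-to-channel pixel shuffle and a parameter-free shortcut. This is an illustration of the general technique the abstract names, not the paper's implementation: the function names, the merge factor `r=2`, and the choice of average pooling for the non-parametric shortcut are assumptions.

```python
import numpy as np

def pixel_shuffle_compress(tokens, grid_h, grid_w, r=2):
    """Merge each r x r block of visual tokens into one token by
    stacking spatial neighbours along the channel axis
    (space-to-channel). Illustrative sketch, not the paper's code.
    tokens: (num_tokens, dim), num_tokens == grid_h * grid_w.
    Returns: (num_tokens // r**2, dim * r**2)."""
    n, d = tokens.shape
    assert n == grid_h * grid_w and grid_h % r == 0 and grid_w % r == 0
    x = tokens.reshape(grid_h, grid_w, d)
    # Split the grid into r x r blocks, then fold each block into channels.
    x = x.reshape(grid_h // r, r, grid_w // r, r, d)
    x = x.transpose(0, 2, 1, 3, 4)  # (H/r, W/r, r, r, d)
    return x.reshape((grid_h // r) * (grid_w // r), r * r * d)

def nonparametric_shortcut(tokens, grid_h, grid_w, r=2):
    """Parameter-free residual path (assumed here to be average
    pooling): average each r x r block, keeping the channel dim."""
    n, d = tokens.shape
    x = tokens.reshape(grid_h // r, r, grid_w // r, r, d)
    return x.mean(axis=(1, 3)).reshape(-1, d)
```

In a full model, a learned projection would map the `r*r*d` merged channels back to `d`, and the shortcut would be added to that projection so compression preserves the pooled information even early in training.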

@article{liu2025_2507.02279,
  title={LaCo: Efficient Layer-wise Compression of Visual Tokens for Multimodal Large Language Models},
  author={Juntao Liu and Liqiang Niu and Wenchao Chen and Jie Zhou and Fandong Meng},
  journal={arXiv preprint arXiv:2507.02279},
  year={2025}
}