
Token Sequence Compression for Efficient Multimodal Computing

Abstract

The exponential growth of Large Multimodal Models (LMMs) has driven advancements in cross-modal reasoning, but at significant computational cost. In this work, we focus on visual language models: we highlight the redundancy and inefficiency in current vision encoders and seek to construct an adaptive compression method for multimodal data. We characterize a panoply of visual token selection and merging approaches through both benchmarking and qualitative analysis. In particular, we demonstrate that simple cluster-level token aggregation outperforms prior state-of-the-art token selection and merging methods, including merging at the vision encoder level and attention-based approaches. Through cross-modal attention visualizations, we further underline the redundancy in current vision encoders and shed light on several puzzling trends in the principles of visual token selection. This work is a first effort towards more effective encoding and processing of high-dimensional data, and paves the way for more scalable and sustainable multimodal systems.
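The abstract does not spell out the aggregation procedure, so the sketch below shows one plausible reading of "cluster-level token aggregation": k-means clustering over patch-token embeddings, followed by within-cluster mean pooling to shrink the visual token sequence before it enters the language model. The function name, cluster count, and pooling rule are illustrative assumptions, not the authors' published method.

```python
# Hedged sketch of cluster-level visual token aggregation.
# Assumptions (not from the paper): k-means with Euclidean distance,
# mean pooling within clusters, and a fixed target token count.
import torch

def aggregate_visual_tokens(tokens: torch.Tensor,
                            num_clusters: int,
                            iters: int = 10) -> torch.Tensor:
    """Compress (N, D) visual token embeddings to (num_clusters, D)."""
    n = tokens.shape[0]
    # Initialize centroids from a random subset of tokens.
    idx = torch.randperm(n)[:num_clusters]
    centroids = tokens[idx].clone()

    for _ in range(iters):
        # Assign each token to its nearest centroid.
        dists = torch.cdist(tokens, centroids)   # (N, K)
        assign = dists.argmin(dim=1)             # (N,)
        # Move each centroid to the mean of its assigned tokens.
        for k in range(num_clusters):
            members = tokens[assign == k]
            if members.numel() > 0:
                centroids[k] = members.mean(dim=0)
    return centroids

# Example: compress 576 CLIP-style patch tokens down to 64 merged tokens.
vis_tokens = torch.randn(576, 1024)
compressed = aggregate_visual_tokens(vis_tokens, num_clusters=64)
print(compressed.shape)  # torch.Size([64, 1024])
```

A selection-based alternative would instead keep a subset of the original tokens (e.g., those nearest each centroid); the abstract's claim is that merging at the cluster level, as above, outperforms such selection schemes.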

@article{omri2025_2504.17892,
  title={Token Sequence Compression for Efficient Multimodal Computing},
  author={Yasmine Omri and Parth Shroff and Thierry Tambe},
  journal={arXiv preprint arXiv:2504.17892},
  year={2025}
}