CASP: Compression of Large Multimodal Models Based on Attention Sparsity

Abstract

In this work, we propose an extreme compression technique for Large Multimodal Models (LMMs). While previous studies have explored quantization as an efficient post-training compression method for Large Language Models (LLMs), low-bit compression of multimodal models remains under-explored. The redundant nature of inputs in multimodal models results in a highly sparse attention matrix. We theoretically and experimentally demonstrate that the sparsity of the attention matrix bounds the compression error of the Query and Key weight matrices. Based on this, we introduce CASP, a model compression technique for LMMs. Our approach performs a data-aware low-rank decomposition of the Query and Key weight matrices, followed by quantization across all layers based on an optimal bit-allocation process. CASP is compatible with any quantization technique and improves state-of-the-art 2-bit quantization methods (AQLM and QuIP#) by an average of 21% on image- and video-language benchmarks.
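
The two-step recipe described above (a data-aware low-rank decomposition of the Query and Key weight matrices, then low-bit quantization) can be sketched concretely. The following is a minimal NumPy illustration, not the paper's implementation: it assumes a Cholesky-whitened truncated SVD as the data-aware decomposition and a plain symmetric uniform quantizer as a stand-in for the AQLM/QuIP# quantizers; the function names and calibration setup are hypothetical.

```python
import numpy as np

def low_rank_decompose(W, X, rank):
    """Data-aware low-rank factorization W ~= A @ B.

    Weights the approximation error by calibration activations X
    (n_samples x d_in) via a Cholesky-whitened truncated SVD. This is
    one common recipe, not necessarily CASP's exact procedure.
    """
    S = X.T @ X / X.shape[0]                      # activation second moments
    L = np.linalg.cholesky(S + 1e-6 * np.eye(S.shape[0]))
    U, s, Vt = np.linalg.svd(L.T @ W, full_matrices=False)
    A = np.linalg.solve(L.T, U[:, :rank] * s[:rank])  # shape: d_in x rank
    B = Vt[:rank]                                     # shape: rank x d_out
    return A, B

def quantize_uniform(W, bits):
    """Symmetric uniform quantizer: a simple stand-in for the codebook
    quantizers (AQLM, QuIP#) that the paper builds on."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / qmax
    return np.clip(np.round(W / scale), -qmax, qmax) * scale

# Toy demo on one hypothetical Query/Key projection.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))   # calibration activations (n x d_in)
W = rng.standard_normal((64, 64))    # a Query or Key weight matrix
A, B = low_rank_decompose(W, X, rank=16)
W_hat = quantize_uniform(A, bits=2) @ quantize_uniform(B, bits=2)
rel_err = np.linalg.norm(X @ W - X @ W_hat) / np.linalg.norm(X @ W)
print(f"relative output error at rank 16, 2-bit factors: {rel_err:.3f}")
```

The whitening step makes the SVD minimize error in the layer's output space rather than in raw weight space; in the full method, the rank and per-layer bit-widths would be chosen jointly by the optimal bit-allocation step rather than fixed as here.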

@article{gholami2025_2503.05936,
  title={CASP: Compression of Large Multimodal Models Based on Attention Sparsity},
  author={Mohsen Gholami and Mohammad Akbari and Kevin Cannons and Yong Zhang},
  journal={arXiv preprint arXiv:2503.05936},
  year={2025}
}