PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models

11 April 2025
Mohamed Dhouib
Davide Buscaldi
Sonia Vanier
Aymen Shabou
Abstract

Visual Language Models require substantial computational resources for inference due to the additional input tokens needed to represent visual information. However, these visual tokens often contain redundant and unimportant information, resulting in an unnecessarily high number of tokens. To address this, we introduce PACT, a method that reduces inference time and memory usage by pruning irrelevant tokens and merging visually redundant ones at an early layer of the language model. Our approach uses a novel importance metric to identify unimportant tokens without relying on attention scores, making it compatible with FlashAttention. We also propose a novel clustering algorithm, called Distance Bounded Density Peak Clustering, which efficiently clusters visual tokens while constraining the distances between elements within a cluster by a predefined threshold. We demonstrate the effectiveness of PACT through extensive experiments.
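
As a rough illustration of the clustering step only, the sketch below implements a distance-bounded variant of density-peak-style clustering consistent with the abstract's description: tokens are ranked by local density, centers are selected greedily so that no two centers lie within the distance threshold, and every remaining token is assigned to a center closer than that threshold. The cosine-distance metric, the greedy selection rule, and the function name are assumptions made for illustration; this is not the authors' implementation of PACT.

# Minimal sketch of distance-bounded density-peak-style clustering.
# Assumption-based illustration only, not the PACT reference code.
import numpy as np

def distance_bounded_density_peak_clustering(tokens: np.ndarray, threshold: float):
    """Cluster token embeddings so each token lies within `threshold`
    (cosine distance, an assumed metric) of its cluster center.

    tokens: (n, d) array of visual token embeddings.
    Returns (centers, labels): center indices and a center index per token.
    """
    # Pairwise cosine distances between tokens.
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    dist = 1.0 - normed @ normed.T

    # Local density: number of tokens closer than the threshold.
    density = (dist < threshold).sum(axis=1)

    # Greedily pick centers in order of decreasing density, skipping any
    # candidate that already lies within `threshold` of a chosen center.
    # Every non-center is therefore within `threshold` of some center.
    order = np.argsort(-density)
    centers = []
    for i in order:
        if all(dist[i, c] >= threshold for c in centers):
            centers.append(i)
    centers = np.asarray(centers)

    # Assign each token to its nearest center (centers map to themselves).
    labels = centers[np.argmin(dist[:, centers], axis=1)]
    return centers, labels

# Example use: cluster 576 visual tokens of dimension 1024 and merge each
# cluster into its mean, reducing the visual token count.
tokens = np.random.randn(576, 1024).astype(np.float32)
centers, labels = distance_bounded_density_peak_clustering(tokens, threshold=0.35)
merged = np.stack([tokens[labels == c].mean(axis=0) for c in centers])

The greedy center selection bounds each member's distance to its center by the threshold, which is one straightforward way to realize the "distance bounded" constraint; the importance-based pruning step that PACT applies before merging is not sketched here.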

@article{dhouib2025_2504.08966,
  title={PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models},
  author={Mohamed Dhouib and Davide Buscaldi and Sonia Vanier and Aymen Shabou},
  journal={arXiv preprint arXiv:2504.08966},
  year={2025}
}