ResearchTrend.AI

Lifting the Veil on Visual Information Flow in MLLMs: Unlocking Pathways to Faster Inference

17 March 2025
Hao Yin
Guangzong Si
Zilei Wang
Abstract

Multimodal large language models (MLLMs) improve performance on vision-language tasks by integrating visual features from pre-trained vision encoders into large language models (LLMs). However, how MLLMs process and utilize visual information remains unclear. In this paper, we uncover a shift in the dominant flow of visual information: (1) in shallow layers, strong interactions occur between image tokens and instruction tokens, where most visual information is injected into instruction tokens to form cross-modal semantic representations; (2) in deeper layers, image tokens primarily interact with each other, aggregating the remaining visual information to refine semantic representations within the visual modality. Based on these insights, we propose Hierarchical Modality-Aware Pruning (HiMAP), a plug-and-play inference-acceleration method that dynamically prunes image tokens at specific layers, reducing computational costs by approximately 65% without sacrificing performance. Our findings offer a new understanding of visual information processing in MLLMs and provide a state-of-the-art solution for efficient inference.
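The two-stage, layer-dependent pruning idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the attention-based ranking criterion, and the keep ratios are illustrative assumptions; HiMAP's precise scoring and layer selection are detailed in the paper itself.

```python
import numpy as np

def prune_image_tokens(image_feats, attn_scores, keep_ratio):
    """Keep the image tokens that receive the highest attention mass.

    image_feats : (n_img, d) array of image-token hidden states
    attn_scores : (n_img,) attention each image token receives
                  (from instruction tokens in shallow layers,
                   from other image tokens in deeper layers)
    keep_ratio  : fraction of image tokens to retain
    """
    n_keep = max(1, int(len(attn_scores) * keep_ratio))
    keep_idx = np.sort(np.argsort(attn_scores)[-n_keep:])  # preserve order
    return image_feats[keep_idx], keep_idx

# Toy example: 8 image tokens pruned hierarchically at two stages.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))

# Stage 1 (shallow layers): rank by instruction-to-image attention.
instr_attn = rng.random(8)
feats, idx1 = prune_image_tokens(feats, instr_attn, keep_ratio=0.5)  # 8 -> 4

# Stage 2 (deeper layers): rank survivors by image-to-image attention.
img_attn = rng.random(len(feats))
feats, idx2 = prune_image_tokens(feats, img_attn, keep_ratio=0.5)    # 4 -> 2
print(feats.shape)  # (2, 4)
```

Because the sequence length entering the deep layers shrinks at each stage, the quadratic attention cost falls accordingly, which is the mechanism behind the reported speedup.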

View on arXiv
@article{yin2025_2503.13108,
  title={Lifting the Veil on Visual Information Flow in MLLMs: Unlocking Pathways to Faster Inference},
  author={Hao Yin and Guangzong Si and Zilei Wang},
  journal={arXiv preprint arXiv:2503.13108},
  year={2025}
}