Looking Beyond Language Priors: Enhancing Visual Comprehension and Attention in Multimodal Models

Abstract

Achieving deep alignment between vision and language remains a central challenge for Multimodal Large Language Models (MLLMs). These models often fail to fully exploit visual input, defaulting instead to strong language priors. Our approach first provides insight into how MLLMs internally build visual understanding of image regions, and then introduces techniques to amplify this capability. Specifically, we explore techniques designed both to deepen the model's understanding of visual content and to ensure that these visual insights actively guide language generation. We demonstrate the superior multimodal understanding of the resulting model through a detailed upstream analysis quantifying its ability to predict visually dependent tokens, as well as a 10-point boost on visually challenging tasks.
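
As a rough illustration of the kind of upstream analysis the abstract mentions, one way to quantify "visually dependent" tokens is to compare a model's per-token log-probabilities with and without the image in context; tokens whose likelihood rises sharply when the image is present are the ones the language prior alone cannot predict. The sketch below is only an assumption of how such a probe might look (the paper does not publish this code), and the score_tokens interface is a hypothetical stand-in for an MLLM forward pass.

# Minimal sketch (Python/PyTorch), assuming a hypothetical scoring interface.
import torch

def score_tokens(token_ids: torch.Tensor, use_image: bool) -> torch.Tensor:
    # Hypothetical stand-in for an MLLM forward pass: returns per-token
    # log-probabilities of token_ids conditioned on the prompt plus
    # (optionally) the image. Random scores here keep the sketch runnable.
    torch.manual_seed(0 if use_image else 1)
    logits = torch.randn(len(token_ids), 100)
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[torch.arange(len(token_ids)), token_ids]

def visually_dependent(token_ids: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # Mark tokens whose log-likelihood improves by more than `margin` nats
    # when the image is present, i.e. tokens the language prior cannot
    # predict well on its own.
    gain = score_tokens(token_ids, use_image=True) - score_tokens(token_ids, use_image=False)
    return gain > margin

if __name__ == "__main__":
    caption_tokens = torch.tensor([17, 42, 8, 93, 5])
    print(visually_dependent(caption_tokens))

In a real probe, score_tokens would run the actual MLLM on caption tokens from an evaluation set, and the fraction of visually dependent tokens predicted correctly would serve as the upstream metric.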

@article{ghatkesar2025_2505.05626,
  title={Looking Beyond Language Priors: Enhancing Visual Comprehension and Attention in Multimodal Models},
  author={Aarti Ghatkesar and Uddeshya Upadhyay and Ganesh Venkatesh},
  journal={arXiv preprint arXiv:2505.05626},
  year={2025}
}