Identifying and Mitigating Position Bias of Multi-image Vision-Language Models

18 March 2025
Xinyu Tian, Shu Zou, Zhaoyuan Yang, Jing Zhang
Abstract

The evolution of Large Vision-Language Models (LVLMs) has progressed from single to multi-image reasoning. Despite this advancement, our findings indicate that LVLMs struggle to robustly utilize information across multiple images, with predictions significantly affected by changes in image position. To further explore this issue, we introduce Position-wise Question Answering (PQA), a meticulously designed task to quantify reasoning capabilities at each position. Our analysis reveals a pronounced position bias in LVLMs: open-source models excel in reasoning with images positioned later but underperform with those in the middle or at the beginning, while proprietary models show improved comprehension for images at the beginning and end but struggle with those in the middle. Motivated by this, we propose SoFt Attention (SoFA), a simple, training-free approach that mitigates this bias by linearly interpolating between inter-image causal attention and its bidirectional counterpart. Experimental results demonstrate that SoFA reduces position bias and enhances the reasoning performance of existing LVLMs.
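The abstract gives only the high-level recipe for SoFA: linearly interpolate between the model's causal attention mask and a bidirectional one, restricted to inter-image token pairs. A minimal PyTorch sketch of that idea follows; the function name, the (start, end) layout of image_spans, and the single interpolation weight lam are illustrative assumptions, not the authors' implementation.

import torch

def sofa_attention_mask(seq_len, image_spans, lam=0.5):
    # Sketch of a SoFA-style soft mask (assumed interface, not the paper's code).
    # seq_len:     total number of tokens in the interleaved sequence
    # image_spans: list of (start, end) index ranges, one per image
    # lam:         weight in [0, 1]; 0 = pure causal, 1 = fully bidirectional
    causal = torch.tril(torch.ones(seq_len, seq_len))  # 1 where token i may attend to j <= i
    bidir = torch.ones(seq_len, seq_len)               # 1 everywhere: bidirectional access
    mask = causal.clone()
    # Interpolate only between tokens of *different* images, so tokens of
    # earlier images gain soft access to later ones; within-image and text
    # tokens keep ordinary causal attention.
    for i, (s1, e1) in enumerate(image_spans):
        for j, (s2, e2) in enumerate(image_spans):
            if i != j:
                mask[s1:e1, s2:e2] = (1 - lam) * causal[s1:e1, s2:e2] + lam * bidir[s1:e1, s2:e2]
    return mask  # soft weights in [0, 1] rather than a hard 0/1 mask

In use, such a soft mask would scale the pre-softmax attention scores (for example, added as log(mask) to the logits, which still hard-masks the zero entries), and the single scalar lam is the only knob, consistent with the abstract's claim that SoFA is training-free.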

View on arXiv: https://arxiv.org/abs/2503.13792
@article{tian2025_2503.13792,
  title={Identifying and Mitigating Position Bias of Multi-image Vision-Language Models},
  author={Xinyu Tian and Shu Zou and Zhaoyuan Yang and Jing Zhang},
  journal={arXiv preprint arXiv:2503.13792},
  year={2025}
}