
HRScene: How Far Are VLMs from Effective High-Resolution Image Understanding?

Abstract

High-resolution image (HRI) understanding aims to process images with a large number of pixels, such as pathological images and agricultural aerial images, both of which can exceed 1 million pixels. Vision Large Language Models (VLMs) can allegedly handle HRIs; however, there is no comprehensive benchmark for evaluating VLMs on HRI understanding. To address this gap, we introduce HRScene, a novel unified benchmark for HRI understanding with rich scenes. HRScene incorporates 25 real-world datasets and 2 synthetic diagnostic datasets with resolutions ranging from 1,024 × 1,024 to 35,503 × 26,627. HRScene is collected and re-annotated by 10 graduate-level annotators, covering 25 scenarios, ranging from microscopic and radiology images to street views, long-range pictures, and telescope images. It includes HRIs of real-world objects, scanned documents, and composite multi-image collages. The two diagnostic evaluation datasets are synthesized by combining the target image, which contains the gold answer, with distracting images in different orders, assessing how well models utilize regions of an HRI. We conduct extensive experiments involving 28 VLMs, including Gemini 2.0 Flash and GPT-4o. Experiments on HRScene show that current VLMs achieve an average accuracy of around 50% on real-world tasks, revealing significant gaps in HRI understanding. Results on the synthetic datasets reveal that VLMs struggle to effectively utilize HRI regions, exhibiting significant Regional Divergence and lost-in-the-middle effects, shedding light on future research.
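As a rough illustration of how such a diagnostic composite might be constructed, the sketch below pastes a single target tile among distractor tiles at a chosen grid position, so that the position of the gold-answer region can be varied to probe effects such as lost-in-the-middle. The function name, grid size, and tile size are illustrative assumptions, not the paper's actual synthesis pipeline.

from PIL import Image

def compose_diagnostic_grid(target, distractors, target_index, grid=(4, 4), tile=512):
    """Paste one target image among distractors on a grid canvas.

    target_index selects the row-major cell that receives the target tile,
    letting us vary where the gold-answer region appears in the composite.
    (Illustrative sketch; parameters are assumptions, not the paper's setup.)
    """
    rows, cols = grid
    assert 0 <= target_index < rows * cols
    assert len(distractors) >= rows * cols - 1

    canvas = Image.new("RGB", (cols * tile, rows * tile))
    remaining = iter(distractors)
    for idx in range(rows * cols):
        img = target if idx == target_index else next(remaining)
        img = img.resize((tile, tile))
        r, c = divmod(idx, cols)
        canvas.paste(img, (c * tile, r * tile))
    return canvas

# Example (hypothetical file names): a 4x4 composite of 2,048 x 2,048 pixels
# with the target placed in a middle cell.
# composite = compose_diagnostic_grid(
#     Image.open("target.png"),
#     [Image.open(p) for p in distractor_paths],
#     target_index=6,
# )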

@article{zhang2025_2504.18406,
  title={HRScene: How Far Are VLMs from Effective High-Resolution Image Understanding?},
  author={Yusen Zhang and Wenliang Zheng and Aashrith Madasu and Peng Shi and Ryo Kamoi and Hao Zhou and Zhuoyang Zou and Shu Zhao and Sarkar Snigdha Sarathi Das and Vipul Gupta and Xiaoxin Lu and Nan Zhang and Ranran Haoran Zhang and Avitej Iyer and Renze Lou and Wenpeng Yin and Rui Zhang},
  journal={arXiv preprint arXiv:2504.18406},
  year={2025}
}