
Zoomer: Adaptive Image Focus Optimization for Black-box MLLM

Abstract

Recent advancements in multimodal large language models (MLLMs) have broadened the scope of vision-language tasks, excelling in applications like image captioning and interactive question-answering. However, these models struggle with accurately processing visual data, particularly in tasks requiring precise object recognition and fine visual details. Stringent token limits often result in the omission of critical information, hampering performance. To address these limitations, we introduce \SysName, a novel visual prompting mechanism designed to enhance MLLM performance while preserving essential visual details within token limits. \SysName features three key innovations: a prompt-aware strategy that dynamically highlights relevant image regions, a spatial-preserving orchestration schema that maintains object integrity, and a budget-aware prompting method that balances global context with crucial visual details. Comprehensive evaluations across multiple datasets demonstrate that \SysName consistently outperforms baseline methods, achieving up to a 26.9\% improvement in accuracy while significantly reducing token consumption.

@article{qian2025_2505.00742,
  title={Zoomer: Adaptive Image Focus Optimization for Black-box MLLM},
  author={Jiaxu Qian and Chendong Wang and Yifan Yang and Chaoyun Zhang and Huiqiang Jiang and Xufang Luo and Yu Kang and Qingwei Lin and Anlan Zhang and Shiqi Jiang and Ting Cao and Tianjun Mao and Suman Banerjee and Guyue Liu and Saravan Rajmohan and Dongmei Zhang and Yuqing Yang and Qi Zhang and Lili Qiu},
  journal={arXiv preprint arXiv:2505.00742},
  year={2025}
}