
Reinforcing VLMs to Use Tools for Detailed Visual Reasoning Under Resource Constraints

Main: 5 pages · 4 figures · 1 table · Bibliography: 3 pages · Appendix: 2 pages
Abstract

Despite tremendous recent advances in the reasoning abilities of large models, vision-language models (VLMs) still struggle with detailed visual reasoning, especially when compute resources are limited. To address this challenge, we draw inspiration from DeepSeek-R1-style training and use Group Relative Policy Optimization (GRPO) to train smaller-scale VLMs to use external tools such as zooming. The greatest benefit comes from combining GRPO learning with a simple reward structure, a simplified tool-calling interface, additional tokens allocated to the result of the tool call, and a training data mix that over-represents visually difficult examples. Compared to similarly sized baseline models, our method achieves better performance on some visual question-answering (VQA) tasks, thanks to the detailed visual information gathered from the external tool.
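The core of GRPO is that each sampled response is scored relative to the other responses in its group, removing the need for a learned value function. A minimal sketch of that group-relative advantage computation is below; the reward values and group size are illustrative assumptions, not the paper's actual reward structure.

```python
# Sketch of the group-relative advantage at the heart of GRPO:
# A_i = (r_i - mean(r)) / (std(r) + eps), computed over a group of
# responses sampled for the same prompt.
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-8):
    """Normalize each response's reward against its own group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical example: 4 rollouts for one VQA prompt, rewarded 1.0
# for a correct final answer and 0.0 otherwise.
advantages = grpo_advantages([1.0, 0.0, 0.0, 1.0])
print(advantages)
```

Responses rewarded above the group mean receive a positive advantage and are reinforced; the rest are suppressed, which is what lets a simple binary reward shape tool-use behavior.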
