VoQA: Visual-only Question Answering

We propose Visual-only Question Answering (VoQA), a novel multimodal task in which questions are visually embedded within images, without any accompanying textual input. This setting requires models to locate, recognize, and reason over the textual question embedded in the image, and it poses substantial challenges for existing large vision-language models (LVLMs), which show notable performance drops even with carefully designed prompts. To bridge this gap, we introduce Guided Response Triggering Supervised Fine-tuning (GRT-SFT), a structured fine-tuning strategy that guides the model to perform step-by-step reasoning based purely on visual input, significantly improving performance. Our work enhances models' capacity for human-like visual understanding in complex multimodal scenarios, where information, including language, is perceived visually.
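To make the task setup concrete, below is a minimal Python sketch of how a VoQA-style sample and a GRT-SFT-style training target might be constructed. The rendering layout, banner size, and the trigger/answer token names are our own assumptions for illustration; the paper's exact data construction and supervision format may differ.

```python
from PIL import Image, ImageDraw, ImageFont

def make_voqa_sample(image_path: str, question: str) -> Image.Image:
    """Render the question as text inside the image so the model receives
    no separate textual prompt (the core VoQA setting). The banner layout
    and default font are assumptions, not the paper's exact procedure."""
    img = Image.open(image_path).convert("RGB")
    banner_h = 40
    canvas = Image.new("RGB", (img.width, img.height + banner_h), "white")
    canvas.paste(img, (0, 0))
    draw = ImageDraw.Draw(canvas)
    draw.text((10, img.height + 10), question,
              fill="black", font=ImageFont.load_default())
    return canvas

def grt_sft_target(question: str, answer: str) -> str:
    """Hypothetical GRT-SFT-style supervision string: the model is guided to
    first transcribe the embedded question, then emit the answer after a
    response-trigger token. The token name is an assumption."""
    return f"{question} [ANSWER] {answer}"

# Example usage (paths and strings are placeholders):
# sample = make_voqa_sample("dog.jpg", "What animal is in the picture?")
# target = grt_sft_target("What animal is in the picture?", "A dog.")
```

The intent of the target format is that decoding the transcribed question first forces the model to locate and recognize the embedded text before answering, mirroring the step-by-step reasoning the abstract describes.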
@article{jiang2025_2505.14227,
  title   = {VoQA: Visual-only Question Answering},
  author  = {Luyang Jiang and Jianing An and Jie Luo and Wenjun Wu and Lei Huang},
  journal = {arXiv preprint arXiv:2505.14227},
  year    = {2025}
}