OWLViz: An Open-World Benchmark for Visual Question Answering

4 March 2025
Thuy Nguyen
Dang Nguyen
Hoang Nguyen
Thuan Luong
Long Hoang Dang
Viet Dac Lai
Abstract

We present a challenging benchmark for the Open WorLd VISual question answering (OWLViz) task. OWLViz presents concise, unambiguous queries that require integrating multiple capabilities, including visual understanding, web exploration, and specialized tool usage. While humans achieve 69.2% accuracy on these intuitive tasks, even state-of-the-art VLMs struggle, with the best model, Gemini 2.0, achieving only 26.6% accuracy. Current agentic VLMs, which rely on limited vision and vision-language models as tools, perform even worse. This performance gap reveals significant limitations in multimodal systems' ability to select appropriate tools and execute complex reasoning sequences, establishing new directions for advancing practical AI research.

@article{nguyen2025_2503.07631,
  title={OWLViz: An Open-World Benchmark for Visual Question Answering},
  author={Thuy Nguyen and Dang Nguyen and Hoang Nguyen and Thuan Luong and Long Hoang Dang and Viet Dac Lai},
  journal={arXiv preprint arXiv:2503.07631},
  year={2025}
}