VisualQuest: A Diverse Image Dataset for Evaluating Visual Recognition in LLMs

Abstract

This paper introduces VisualQuest, a novel image dataset designed to assess the ability of large language models (LLMs) to interpret non-traditional, stylized imagery. Unlike conventional photographic benchmarks, VisualQuest challenges models with images incorporating abstract, symbolic, and metaphorical elements, which require the integration of domain-specific knowledge and advanced reasoning. The dataset was curated through multiple stages of filtering, annotation, and standardization to ensure quality and diversity. Our evaluations of several state-of-the-art multimodal LLMs reveal significant performance variations, underscoring the importance of both factual background knowledge and inferential capabilities in visual recognition tasks. VisualQuest thus provides a robust and comprehensive benchmark for advancing research in multimodal reasoning and model architecture design.

@article{xiao2025_2503.19936,
  title={VisualQuest: A Diverse Image Dataset for Evaluating Visual Recognition in LLMs},
  author={Kelaiti Xiao and Liang Yang and Paerhati Tulajiang and Hongfei Lin},
  journal={arXiv preprint arXiv:2503.19936},
  year={2025}
}