VAQUUM: Are Vague Quantifiers Grounded in Visual Data?

17 February 2025
Hugh Mee Wong
Rick Nouwen
Albert Gatt
Abstract

Vague quantifiers such as "a few" and "many" are influenced by many contextual factors, including how many objects are present in a given context. In this work, we evaluate the extent to which vision-and-language models (VLMs) are compatible with humans when producing or judging the appropriateness of vague quantifiers in visual contexts. We release a novel dataset, VAQUUM, containing 20300 human ratings on quantified statements across a total of 1089 images. Using this dataset, we compare human judgments and VLM predictions using three different evaluation methods. Our findings show that VLMs, like humans, are influenced by object counts in vague quantifier use. However, we find significant inconsistencies across models in different evaluation settings, suggesting that judging and producing vague quantifiers rely on two different processes.
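The abstract does not spell out the three evaluation methods, but one natural way to compare human judgments with VLM predictions is to correlate per-statement human appropriateness ratings with model scores. The sketch below illustrates this under assumed file and column names; it is not the authors' evaluation code, and the use of Spearman correlation is an illustrative choice.

```python
# Minimal sketch (not the VAQUUM evaluation code): correlate mean human
# appropriateness ratings for quantified statements with a VLM's scores.
# File paths and column names ("image_id", "quantifier", "rating", "score")
# are assumptions for illustration only.
import pandas as pd
from scipy.stats import spearmanr


def correlate_ratings(human_csv: str, model_csv: str) -> float:
    """Spearman correlation between mean human ratings and VLM scores."""
    human = pd.read_csv(human_csv)   # columns: image_id, quantifier, rating
    model = pd.read_csv(model_csv)   # columns: image_id, quantifier, score

    # Average the human ratings per (image, quantifier) pair.
    mean_human = (
        human.groupby(["image_id", "quantifier"])["rating"].mean().reset_index()
    )

    # Align human means with model scores on the same (image, quantifier) keys.
    merged = mean_human.merge(model, on=["image_id", "quantifier"])
    rho, _ = spearmanr(merged["rating"], merged["score"])
    return rho


if __name__ == "__main__":
    print(correlate_ratings("vaquum_human_ratings.csv", "vlm_scores.csv"))
```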

@article{wong2025_2502.11874,
  title={VAQUUM: Are Vague Quantifiers Grounded in Visual Data?},
  author={Hugh Mee Wong and Rick Nouwen and Albert Gatt},
  journal={arXiv preprint arXiv:2502.11874},
  year={2025}
}