VAQUUM: Are Vague Quantifiers Grounded in Visual Data?

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Main: 8 pages · Appendix: 4 pages · Bibliography: 5 pages · 7 figures · 7 tables
Abstract

Vague quantifiers such as "a few" and "many" are influenced by numerous contextual factors, including how many objects are present in a given context. In this work, we evaluate the extent to which vision-and-language models (VLMs) align with humans when producing or judging the appropriateness of vague quantifiers in visual contexts. We release a novel dataset, VAQUUM, containing 20,300 human ratings on quantified statements across a total of 1,089 images. Using this dataset, we compare human judgments and VLM predictions under three different evaluation methods. Our findings show that VLMs, like humans, are influenced by object counts in their use of vague quantifiers. However, we find significant inconsistencies across models in different evaluation settings, suggesting that judging and producing vague quantifiers rely on two distinct processes.
