Where is this coming from? Making groundedness count in the evaluation of Document VQA models

Document Visual Question Answering (VQA) models have evolved at an impressive rate over the past few years, coming close to or matching human performance on some benchmarks. We argue that the evaluation metrics used by popular benchmarks do not account for the semantic and multimodal groundedness of a model's outputs. As a result, hallucinations and major semantic errors are treated the same way as well-grounded outputs, and the evaluation scores do not reflect the reasoning capabilities of the model. In response, we propose a new evaluation methodology that accounts for the groundedness of predictions with regard to the semantic characteristics of the output as well as the multimodal placement of the output within the input document. Our proposed methodology is parameterized so that users can configure the score according to their preferences. We validate our scoring methodology using human judgment and show its potential impact on existing popular leaderboards. Through extensive analyses, we demonstrate that our proposed method produces scores that better indicate a model's robustness and tend to reward better-calibrated answers.
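The abstract does not specify how the groundedness signals are combined, so the following is only a minimal illustrative sketch of what a parameterized, groundedness-aware score could look like. The function names (grounded_score, anls), the two groundedness inputs (a semantic score and a spatial score in [0, 1]), and the weights alpha and beta are assumptions for illustration, not the paper's actual metric.

from difflib import SequenceMatcher

def anls(prediction: str, reference: str, tau: float = 0.5) -> float:
    # Rough ANLS-style similarity: normalized string similarity,
    # zeroed out below a threshold tau.
    sim = SequenceMatcher(None, prediction.lower(), reference.lower()).ratio()
    return sim if sim >= tau else 0.0

def grounded_score(prediction: str, reference: str,
                   semantic_grounded: float, spatial_grounded: float,
                   alpha: float = 0.5, beta: float = 0.5) -> float:
    # Hypothetical groundedness-aware score (an assumption, not the
    # authors' formula).
    # semantic_grounded: how well the answer's semantics match evidence
    #   in the document (e.g., correct entity type or unit), in [0, 1].
    # spatial_grounded: overlap between the model's cited region and the
    #   annotated evidence region (e.g., bounding-box IoU), in [0, 1].
    # alpha, beta: user-configurable weights controlling how strongly
    #   each groundedness signal discounts the base answer score.
    base = anls(prediction, reference)
    penalty = alpha * (1.0 - semantic_grounded) + beta * (1.0 - spatial_grounded)
    return max(0.0, base * (1.0 - penalty))

# A string-correct but poorly grounded answer scores lower than the
# same answer backed by strong evidence.
print(grounded_score("$4.2M", "$4.2M", semantic_grounded=1.0, spatial_grounded=0.9))
print(grounded_score("$4.2M", "$4.2M", semantic_grounded=0.2, spatial_grounded=0.0))

Under this kind of formulation, the weights alpha and beta play the role of the user-configurable preferences mentioned in the abstract: setting both to zero recovers a plain answer-similarity metric, while larger values penalize answers that are correct as strings but not supported by the document.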
@article{nourbakhsh2025_2503.19120,
  title   = {Where is this coming from? Making groundedness count in the evaluation of Document VQA models},
  author  = {Armineh Nourbakhsh and Siddharth Parekh and Pranav Shetty and Zhao Jin and Sameena Shah and Carolyn Rose},
  journal = {arXiv preprint arXiv:2503.19120},
  year    = {2025}
}