
AVA-Bench: Atomic Visual Ability Benchmark for Vision Foundation Models

Main: 10 pages
Appendix: 17 pages
Bibliography: 7 pages
31 figures
4 tables
Abstract

The rise of vision foundation models (VFMs) calls for systematic evaluation. A common approach pairs VFMs with large language models (LLMs) as general-purpose heads, followed by evaluation on broad Visual Question Answering (VQA) benchmarks. However, this protocol has two key blind spots: (i) the instruction-tuning data may not align with VQA test distributions, meaning a wrong prediction can stem from such data mismatch rather than a VFM's visual shortcomings; (ii) VQA benchmarks often require multiple visual abilities, making it hard to tell whether errors stem from lacking all required abilities or just a single critical one. To address these gaps, we introduce AVA-Bench, the first benchmark that explicitly disentangles 14 Atomic Visual Abilities (AVAs) -- foundational skills like localization, depth estimation, and spatial understanding that collectively support complex visual reasoning tasks. By decoupling AVAs and matching training and test distributions within each, AVA-Bench pinpoints exactly where a VFM excels or falters. Applying AVA-Bench to leading VFMs reveals distinctive "ability fingerprints," turning VFM selection from educated guesswork into principled engineering. Notably, we find that a 0.5B LLM yields VFM rankings similar to those of a 7B LLM while cutting GPU hours by 8x, enabling more efficient evaluation. By offering a comprehensive and transparent benchmark, we hope AVA-Bench lays the foundation for the next generation of VFMs.
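
To make the evaluation idea concrete, below is a minimal, hypothetical sketch of how per-AVA accuracies could form an "ability fingerprint" for each VFM, and how agreement between rankings produced with a 0.5B versus a 7B LLM head could be checked with a rank correlation. All scores, model names, and AVA names here are invented for illustration; this is not the paper's released evaluation code.

# Hypothetical sketch: per-AVA "ability fingerprints" and ranking agreement
# between a small (0.5B) and a large (7B) LLM head. All numbers are invented.
from scipy.stats import spearmanr

# acc[vfm][ava] = accuracy of that VFM on one atomic visual ability,
# obtained by pairing the frozen VFM with an LLM head and evaluating per AVA.
acc_small = {  # scores with a 0.5B LLM head (illustrative values)
    "vfm_a": {"localization": 0.71, "depth": 0.58, "spatial": 0.64},
    "vfm_b": {"localization": 0.66, "depth": 0.62, "spatial": 0.60},
    "vfm_c": {"localization": 0.75, "depth": 0.55, "spatial": 0.69},
}
acc_large = {  # scores with a 7B LLM head (illustrative values)
    "vfm_a": {"localization": 0.78, "depth": 0.63, "spatial": 0.70},
    "vfm_b": {"localization": 0.72, "depth": 0.68, "spatial": 0.66},
    "vfm_c": {"localization": 0.81, "depth": 0.60, "spatial": 0.74},
}

def mean_fingerprint(scores):
    """Collapse each VFM's per-AVA fingerprint into its mean accuracy."""
    return {vfm: sum(avas.values()) / len(avas) for vfm, avas in scores.items()}

vfms = sorted(acc_small)
small_means = [mean_fingerprint(acc_small)[v] for v in vfms]
large_means = [mean_fingerprint(acc_large)[v] for v in vfms]

# If the cheaper 0.5B head preserves the 7B head's ordering of VFMs,
# Spearman's rho approaches 1, supporting the paper's efficiency claim.
rho, _ = spearmanr(small_means, large_means)
print(f"Spearman rank correlation (0.5B vs 7B head): {rho:.2f}")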

@article{mai2025_2506.09082,
  title={AVA-Bench: Atomic Visual Ability Benchmark for Vision Foundation Models},
  author={Zheda Mai and Arpita Chowdhury and Zihe Wang and Sooyoung Jeon and Lemeng Wang and Jiacheng Hou and Jihyung Kil and Wei-Lun Chao},
  journal={arXiv preprint arXiv:2506.09082},
  year={2025}
}