Resampling Benchmark for Efficient Comprehensive Evaluation of Large Vision-Language Models

Abstract

We propose an efficient evaluation protocol for large vision-language models (VLMs). Given their broad knowledge and reasoning capabilities, multiple benchmarks are needed for comprehensive assessment, making evaluation computationally expensive. To improve efficiency, we construct a subset that yields results comparable to full benchmark evaluations. Our benchmark classification experiments reveal that no single benchmark fully covers all challenges. We then introduce a subset construction method based on farthest point sampling (FPS). Our experiments show that FPS-based benchmarks maintain a strong correlation (> 0.96) with full evaluations while using only ~1% of the data. Additionally, applying FPS to an existing benchmark improves its correlation with overall evaluation results, suggesting its potential to reduce unintended dataset biases.
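The core idea of FPS-based subset construction can be sketched as follows. This is a minimal illustration of greedy farthest point sampling over sample embeddings, not the authors' implementation; the embedding source, dimensions, and subset size are illustrative assumptions.

```python
import numpy as np

def farthest_point_sampling(features, k, seed=0):
    """Greedily pick k points, each maximizing its minimum
    distance to the points selected so far (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]  # random starting point
    # distance from every point to its nearest selected point
    min_dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(min_dist))  # farthest from the current subset
        selected.append(idx)
        dist = np.linalg.norm(features - features[idx], axis=1)
        min_dist = np.minimum(min_dist, dist)  # update nearest-selected distances
    return np.array(selected)

# Example: sample ~1% of 1,000 synthetic benchmark-item embeddings
emb = np.random.default_rng(1).normal(size=(1000, 64))
subset = farthest_point_sampling(emb, k=10)
```

Because each new point is the one farthest from everything already chosen, the subset spreads out over the embedding space, which is why it can approximate a much larger, more diverse benchmark pool.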

@article{suzuki2025_2504.09979,
  title={Resampling Benchmark for Efficient Comprehensive Evaluation of Large Vision-Language Models},
  author={Teppei Suzuki and Keisuke Ozawa},
  journal={arXiv preprint arXiv:2504.09979},
  year={2025}
}