
VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models

Haodong Duan, Xinyu Fang, Junming Yang, Xiangyu Zhao, Yuxuan Qiao, Mo Li, Amit Agarwal, Zhe Chen, Lin Chen, Yuan Liu, Yubo Ma, Hailong Sun, Yifan Zhang, Shiyin Lu, Tack Hwa Wong, Weiyun Wang, Peiheng Zhou, Xiaozhe Li, Chaoyou Fu, Junbo Cui, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, Dahua Lin, Kai Chen

Abstract

We present VLMEvalKit: an open-source toolkit for evaluating large multi-modality models based on PyTorch. The toolkit aims to provide a user-friendly and comprehensive framework for researchers and developers to evaluate existing multi-modality models and publish reproducible evaluation results. In VLMEvalKit, we implement over 70 different large multi-modality models, including both proprietary APIs and open-source models, as well as more than 20 different multi-modal benchmarks. By implementing a single interface, new models can be easily added to the toolkit, while the toolkit automatically handles the remaining workloads, including data preparation, distributed inference, prediction post-processing, and metric calculation. Although the toolkit is currently mainly used for evaluating large vision-language models, its design is compatible with future updates that incorporate additional modalities, such as audio and video. Based on the evaluation results obtained with the toolkit, we host OpenVLM Leaderboard, a comprehensive leaderboard to track the progress of multi-modality learning research. The toolkit is released at this https URL and is actively maintained.
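The "single interface" described in the abstract is the toolkit's main extension point: a new model only needs to expose one generation hook, and the toolkit drives data preparation, distributed inference, post-processing, and metric calculation around it. The sketch below is a minimal illustration of that idea; the class name, the generate() signature, and the message format are illustrative assumptions, not VLMEvalKit's exact API.

from typing import Dict, List


class MyVLM:
    """Hypothetical model wrapper; names and signature are assumptions, not the official API."""

    def __init__(self, model_path: str):
        # A real wrapper would load weights and a processor here
        # (e.g. via transformers); this sketch only records the path.
        self.model_path = model_path

    def generate(self, message: List[Dict[str, str]]) -> str:
        # `message` is assumed to be interleaved image/text items, e.g.
        # [{'type': 'image', 'value': 'cat.jpg'},
        #  {'type': 'text',  'value': 'What animal is this?'}]
        images = [m['value'] for m in message if m['type'] == 'image']
        prompt = ' '.join(m['value'] for m in message if m['type'] == 'text')
        # A real implementation would run the underlying model; this returns a stub answer.
        return f'[answer to "{prompt}" given {len(images)} image(s)]'


# Usage sketch: in practice the evaluation toolkit, not the user, would call
# generate() for every benchmark sample during distributed inference.
if __name__ == '__main__':
    model = MyVLM('path/to/checkpoint')
    print(model.generate([
        {'type': 'image', 'value': 'cat.jpg'},
        {'type': 'text', 'value': 'What animal is this?'},
    ]))

Under this design, a single wrapper can be evaluated against any supported benchmark without per-benchmark code, which is what allows the toolkit to cover 70+ models and 20+ benchmarks.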

@article{duan2025_2407.11691,
  title={VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models},
  author={Haodong Duan and Xinyu Fang and Junming Yang and Xiangyu Zhao and Yuxuan Qiao and Mo Li and Amit Agarwal and Zhe Chen and Lin Chen and Yuan Liu and Yubo Ma and Hailong Sun and Yifan Zhang and Shiyin Lu and Tack Hwa Wong and Weiyun Wang and Peiheng Zhou and Xiaozhe Li and Chaoyou Fu and Junbo Cui and Xiaoyi Dong and Yuhang Zang and Pan Zhang and Jiaqi Wang and Dahua Lin and Kai Chen},
  journal={arXiv preprint arXiv:2407.11691},
  year={2025}
}