
Judge Model for Large-scale Multimodality Benchmarks

Min-Han Shih
Yu-Hsin Wu
Yu-Wei Chen
Main: 6 pages · Bibliography: 2 pages · Appendix: 1 page · 4 figures · 8 tables
Abstract

We propose a dedicated multimodal Judge Model designed to provide reliable, explainable evaluation across a diverse suite of tasks. Our benchmark spans the text, audio, image, and video modalities, drawing on carefully sampled public datasets with fixed seeds to ensure reproducibility and minimize train-test leakage. Rather than producing simple scores, our framework aggregates multimodal judgments, analyzes the quality and reasoning consistency of model outputs, and generates diagnostic feedback. We evaluate several MLLMs, including Gemini 2.5, Phi 4, and Qwen 2.5, on 280 multimodal samples and compare the Judge Model's assessments with those of human annotators. The results show strong alignment between Judge Model and human scores, demonstrating the model's potential as a scalable, interpretable evaluation pipeline for future multimodal AI research.
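Two mechanisms the abstract names can be made concrete: fixed-seed sampling for reproducible benchmark construction, and measuring agreement between judge and human scores. The sketch below is purely illustrative and not the authors' code; the function sample_benchmark and the score lists are hypothetical, and Pearson/Spearman correlation stands in for whatever alignment metric the paper actually reports.

```python
import random

from scipy.stats import pearsonr, spearmanr


def sample_benchmark(dataset, n_samples, seed=42):
    """Draw a fixed-seed sample from a public dataset so the
    benchmark split is identical across runs (hypothetical helper)."""
    rng = random.Random(seed)  # fixed seed -> reproducible sample
    return rng.sample(list(dataset), n_samples)


# Toy judge-vs-human agreement check on matched per-sample scores
# (values are invented for illustration, not results from the paper).
judge_scores = [4.0, 3.5, 5.0, 2.0, 4.5]
human_scores = [4.0, 3.0, 5.0, 2.5, 4.5]

r, _ = pearsonr(judge_scores, human_scores)
rho, _ = spearmanr(judge_scores, human_scores)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```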
