OmniGenBench: A Benchmark for Omnipotent Multimodal Generation across 50+ Tasks

Recent breakthroughs in large multimodal models (LMMs), exemplified by GPT-4o-Native, have demonstrated remarkable proficiency in following general-purpose instructions for image generation. However, existing benchmarks lack the breadth and depth needed to fully evaluate the diverse capabilities of these models. To overcome this limitation, we introduce OmniGenBench, a novel and comprehensive benchmark meticulously designed to assess the instruction-following abilities of state-of-the-art LMMs across both perception-centric and cognition-centric dimensions. OmniGenBench comprises 57 diverse sub-tasks grounded in real-world scenarios, systematically categorized according to the specific model capabilities they demand. For rigorous evaluation, we further employ a dual-mode protocol that uses off-the-shelf visual parsing tools for perception-centric tasks and a powerful LLM-based judge for cognition-centric tasks to assess the alignment between generated images and user instructions. Using OmniGenBench, we evaluate mainstream generative models, including GPT-4o, Gemini-2.0-Flash, and Seedream, and provide in-depth comparisons and analyses of their performance. Code and data are available at this https URL.
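To make the dual-mode evaluation protocol concrete, below is a minimal sketch of how such a dispatcher could be organized. Every name in it (Task, detect_objects, llm_judge, evaluate) is an illustrative assumption, not the paper's actual interface; the benchmark's real parsers, judge prompts, and scoring rules are defined in its released code.

# Hypothetical sketch of a dual-mode evaluation protocol.
# All identifiers are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Task:
    category: str          # "perception" or "cognition"
    instruction: str       # the user instruction given to the generator
    expected: list[str]    # e.g., object labels the image must contain

def detect_objects(image_path: str) -> list[str]:
    """Stand-in for an off-the-shelf visual parser (e.g., an
    open-vocabulary detector) that returns labels found in the image."""
    raise NotImplementedError  # plug in a real detector here

def llm_judge(instruction: str, image_path: str) -> float:
    """Stand-in for an LLM-based judge that scores how well the
    generated image aligns with the instruction, in [0, 1]."""
    raise NotImplementedError  # plug in a real multimodal LLM here

def evaluate(task: Task, image_path: str) -> float:
    """Dual-mode dispatch: perception-centric tasks are scored with a
    visual parser; cognition-centric tasks go to the LLM judge."""
    if task.category == "perception":
        found = set(detect_objects(image_path))
        # Fraction of required objects actually present in the image.
        return len(found & set(task.expected)) / max(len(task.expected), 1)
    return llm_judge(task.instruction, image_path)

The split mirrors the protocol described above: objective, countable criteria (did the requested objects appear?) are checked mechanically by a parser, while open-ended semantic criteria (did the image realize the intended reasoning or scenario?) are delegated to an LLM judge.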
@article{wang2025_2505.18775,
  title={OmniGenBench: A Benchmark for Omnipotent Multimodal Generation across 50+ Tasks},
  author={Jiayu Wang and Yang Jiao and Yue Yu and Tianwen Qian and Shaoxiang Chen and Jingjing Chen and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2505.18775},
  year={2025}
}