WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation
Text-to-Image (T2I) models are capable of generating high-quality artistic creations and visual content. However, existing research and evaluation standards predominantly focus on image realism and shallow text-image alignment, lacking a comprehensive assessment of complex semantic understanding and world knowledge integration in text-to-image generation. To address this challenge, we propose WISE, the first benchmark specifically designed for World Knowledge-Informed Semantic Evaluation. WISE moves beyond simple word-pixel mapping by challenging models with 1,000 meticulously crafted prompts across 25 sub-domains in cultural common sense, spatio-temporal reasoning, and natural science. To overcome the limitations of the traditional CLIP metric, we introduce WiScore, a novel quantitative metric for assessing knowledge-image alignment. Through comprehensive testing of 20 models (10 dedicated T2I models and 10 unified multimodal models) using the 1,000 structured prompts spanning 25 sub-domains, our findings reveal significant limitations in their ability to effectively integrate and apply world knowledge during image generation, highlighting critical pathways for enhancing knowledge incorporation and application in next-generation T2I models. Code and data are available at this https URL.
@article{niu2025_2503.07265,
  title={WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation},
  author={Yuwei Niu and Munan Ning and Mengren Zheng and Bin Lin and Peng Jin and Jiaqi Liao and Kunpeng Ning and Bin Zhu and Li Yuan},
  journal={arXiv preprint arXiv:2503.07265},
  year={2025}
}