IGenBench: Benchmarking the Reliability of Text-to-Infographic Generation

Yinghao Tang
Xueding Liu
Boyuan Zhang
Tingfeng Lan
Yupeng Xie
Jiale Lao
Yiyao Wang
Haoxuan Li
Tingting Gao
Bo Pan
Luoxuan Weng
Xiuqi Huang
Minfeng Zhu
Yingchaojie Feng
Yuyu Luo
Wei Chen
Main: 8 pages
Bibliography: 4 pages
Appendix: 23 pages
48 figures
2 tables
Abstract

Infographics are composite visual artifacts that combine data visualizations with textual and illustrative elements to communicate information. While recent text-to-image (T2I) models can generate aesthetically appealing images, their reliability in generating infographics remains unclear. Generated infographics may appear correct at first glance yet contain easily overlooked issues, such as distorted data encodings or incorrect textual content. We present IGENBENCH, the first benchmark for evaluating the reliability of text-to-infographic generation, comprising 600 curated test cases spanning 30 infographic types. We design an automated evaluation framework that decomposes reliability verification into atomic yes/no questions based on a taxonomy of 10 question types. We employ multimodal large language models (MLLMs) to verify each question, yielding question-level accuracy (Q-ACC) and infographic-level accuracy (I-ACC). We comprehensively evaluate 10 state-of-the-art T2I models on IGENBENCH. Our systematic analysis reveals key insights for future model development: (i) a three-tier performance hierarchy, with the top model achieving a Q-ACC of 0.90 but an I-ACC of only 0.49; (ii) data-related dimensions emerging as universal bottlenecks (e.g., Data Completeness: 0.21); and (iii) the challenge of achieving end-to-end correctness across all models. We release IGENBENCH at this https URL.
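The two metrics can be sketched as follows. This is a minimal illustration of how question-level and infographic-level accuracy relate: Q-ACC averages over all atomic yes/no verifications, while I-ACC credits an infographic only if every one of its questions passes. The actual pipeline uses MLLM judgments; the function and field names here are assumptions, not the paper's implementation.

```python
from collections import defaultdict

def compute_accuracies(results):
    """results: list of (infographic_id, passed) pairs, one per
    atomic yes/no verification question (names are hypothetical)."""
    total = len(results)
    # Q-ACC: fraction of individual questions that pass.
    q_acc = sum(passed for _, passed in results) / total

    # Group question outcomes by infographic.
    by_info = defaultdict(list)
    for info_id, passed in results:
        by_info[info_id].append(passed)
    # I-ACC: an infographic counts only if ALL its questions pass.
    i_acc = sum(all(v) for v in by_info.values()) / len(by_info)
    return q_acc, i_acc

# Toy example: infographic "a" passes both checks, "b" fails one.
results = [("a", True), ("a", True), ("b", True), ("b", False)]
q_acc, i_acc = compute_accuracies(results)
# q_acc = 0.75, i_acc = 0.5
```

This toy example also shows why the reported gap between Q-ACC (0.90) and I-ACC (0.49) is expected: a single failed question anywhere in an infographic zeroes out its infographic-level credit.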
