T2VTextBench: A Human Evaluation Benchmark for Textual Control in Video Generation Models

Thanks to recent advancements in scalable deep architectures and large-scale pretraining, text-to-video generation has achieved unprecedented capabilities in producing high-fidelity, instruction-following content across a wide range of styles, enabling applications in advertising, entertainment, and education. However, these models' ability to render precise on-screen text, such as captions or mathematical formulas, remains largely untested, posing significant challenges for applications requiring exact textual accuracy. In this work, we introduce T2VTextBench, the first human-evaluation benchmark for on-screen text fidelity and temporal consistency in text-to-video models. Our suite of prompts pairs complex text strings with dynamic scene changes, testing each model's ability to adhere to detailed textual instructions across frames. We evaluate ten state-of-the-art systems, ranging from open-source solutions to commercial offerings, and find that most struggle to generate legible, consistent text. These results highlight a critical gap in current video generators and provide a clear direction for future research aimed at enhancing textual control in video synthesis.
@article{guo2025_2505.04946,
  title={T2VTextBench: A Human Evaluation Benchmark for Textual Control in Video Generation Models},
  author={Xuyang Guo and Jiayan Huo and Zhenmei Shi and Zhao Song and Jiahao Zhang and Jiale Zhao},
  journal={arXiv preprint arXiv:2505.04946},
  year={2025}
}