Recent advances in Large Multimodal Models (LMMs) have expanded their capabilities to video understanding, with Text-to-Video (T2V) models excelling at generating videos from textual prompts. However, these models still frequently produce hallucinated content that reveals AI-generated inconsistencies. We introduce ViBe (this https URL), a large-scale dataset of hallucinated videos from open-source T2V models. We identify five major hallucination types: Vanishing Subject, Omission Error, Numeric Variability, Subject Dysmorphia, and Visual Incongruity. Using ten T2V models, we generated and manually annotated 3,782 videos from 837 diverse MS COCO captions. Our proposed benchmark comprises the dataset of hallucinated videos and a classification framework built on video embeddings. ViBe serves as a critical resource for evaluating T2V reliability and advancing hallucination detection. We establish classification baselines, with a TimeSFormer + CNN ensemble achieving the best performance (0.345 accuracy, 0.342 F1 score). The modest accuracy of these initial baselines highlights the difficulty of automated hallucination detection and the need for improved methods. Our research aims to drive the development of more robust T2V models and to evaluate their outputs based on user preferences.
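To make the embedding-based classification baseline concrete, below is a minimal sketch of the general approach: extract video embeddings with a pretrained TimeSformer backbone and classify them into the five hallucination types with a small CNN head. The abstract does not specify the ensemble's exact architecture, checkpoint, or frame-sampling scheme, so the facebook/timesformer-base-finetuned-k400 checkpoint, the 8-frame input, and the 1D-CNN head here are illustrative assumptions, not the authors' implementation.

```python
# Sketch: hallucination-type classification over TimeSformer video embeddings.
# Assumptions (not from the paper): HF checkpoint, 8 sampled frames, 1D-CNN head.
import numpy as np
import torch
import torch.nn as nn
from transformers import AutoImageProcessor, TimesformerModel

NUM_CLASSES = 5  # Vanishing Subject, Omission Error, Numeric Variability,
                 # Subject Dysmorphia, Visual Incongruity

processor = AutoImageProcessor.from_pretrained("facebook/timesformer-base-finetuned-k400")
backbone = TimesformerModel.from_pretrained("facebook/timesformer-base-finetuned-k400")

class HallucinationHead(nn.Module):
    """1D CNN over the sequence of patch embeddings, then a linear classifier."""
    def __init__(self, hidden_size: int, num_classes: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(hidden_size, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the token dimension
        )
        self.fc = nn.Linear(256, num_classes)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) -> logits: (batch, num_classes)
        x = self.conv(hidden_states.transpose(1, 2)).squeeze(-1)
        return self.fc(x)

head = HallucinationHead(backbone.config.hidden_size, NUM_CLASSES)

# Dummy input: 8 RGB frames of 224x224 (replace with frames sampled from a real video).
video = list(np.random.rand(8, 3, 224, 224).astype(np.float32))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    embeddings = backbone(**inputs).last_hidden_state  # (1, seq_len, 768)
logits = head(embeddings)
print(logits.shape)  # torch.Size([1, 5])
```

In practice the backbone would typically be frozen or fine-tuned and the head trained with cross-entropy loss on the annotated ViBe videos; ensembling (as in the reported TimeSFormer + CNN baseline) would combine predictions from multiple such models.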
@article{rawte2025_2411.10867,
  title={ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models},
  author={Vipula Rawte and Sarthak Jain and Aarush Sinha and Garv Kaushik and Aman Bansal and Prathiksha Rumale Vishwanath and Samyak Rajesh Jain and Aishwarya Naresh Reganti and Vinija Jain and Aman Chadha and Amit P. Sheth and Amitava Das},
  journal={arXiv preprint arXiv:2411.10867},
  year={2025}
}