We'll Fix it in Post: Improving Text-to-Video Generation with Neuro-Symbolic Feedback

Current text-to-video (T2V) generation models are increasingly popular due to their ability to produce coherent videos from textual prompts. However, these models often struggle to generate semantically and temporally consistent videos when dealing with longer, more complex prompts involving multiple objects or sequential events. Additionally, the high computational cost of training or fine-tuning makes direct model improvements impractical. To overcome these limitations, we introduce NeuS-E, a novel zero-training video refinement pipeline that leverages neuro-symbolic feedback to automatically enhance video generation and achieve superior alignment with the prompts. Our approach first derives neuro-symbolic feedback by analyzing a formal video representation, pinpointing semantically inconsistent events, objects, and their corresponding frames. This feedback then guides targeted edits to the original video. Extensive empirical evaluations on both open-source and proprietary T2V models demonstrate that NeuS-E significantly enhances temporal and logical alignment across diverse prompts by almost 40%.
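For illustration only, the sketch below shows one way such a feedback-driven refinement loop could be organized: generate a video with a frozen T2V model, check it against the prompt via a symbolic representation, and patch only the flagged frames. The helper names (generate, find_violations, edit_frames), the Violation structure, and the fixed round count are assumptions for this sketch, not the paper's actual interface.

from dataclasses import dataclass
from typing import Callable, List

Frame = bytes  # placeholder type for a decoded video frame

@dataclass
class Violation:
    """One semantically inconsistent event/object and the frames it affects."""
    event: str
    frames: List[int]

def refine_video(
    prompt: str,
    generate: Callable[[str], List[Frame]],
    find_violations: Callable[[str, List[Frame]], List[Violation]],
    edit_frames: Callable[[List[Frame], Violation], List[Frame]],
    max_rounds: int = 3,
) -> List[Frame]:
    """Zero-training refinement: the generator stays frozen; only the frames
    flagged by the neuro-symbolic checker are edited, round by round."""
    video = generate(prompt)
    for _ in range(max_rounds):
        violations = find_violations(prompt, video)   # symbolic check of video vs. prompt
        if not violations:
            break                                     # video already satisfies the prompt
        for v in violations:
            video = edit_frames(video, v)             # targeted edit of the flagged frames
    return video

In this sketch all of the heavy lifting lives in the injected callables; the loop itself only routes the neuro-symbolic feedback to localized edits, which is what keeps the approach training-free.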
@article{choi2025_2504.17180,
  title   = {We'll Fix it in Post: Improving Text-to-Video Generation with Neuro-Symbolic Feedback},
  author  = {Minkyu Choi and S P Sharan and Harsh Goel and Sahil Shah and Sandeep Chinchali},
  journal = {arXiv preprint arXiv:2504.17180},
  year    = {2025}
}