RAGME: Retrieval Augmented Video Generation for Enhanced Motion Realism

9 April 2025
Elia Peruzzo
Dejia Xu
Xingqian Xu
Humphrey Shi
Nicu Sebe
Communities: DiffM, VGen
Abstract

Video generation is experiencing rapid growth, driven by advances in diffusion models and the development of better and larger datasets. However, producing high-quality videos remains challenging due to the high-dimensional data and the complexity of the task. Recent efforts have primarily focused on enhancing visual quality and addressing temporal inconsistencies, such as flickering. Despite progress in these areas, the generated videos often fall short in terms of motion complexity and physical plausibility, with many outputs either appearing static or exhibiting unrealistic motion. In this work, we propose a framework to improve the realism of motion in generated videos, exploring a complementary direction to much of the existing literature. Specifically, we advocate for the incorporation of a retrieval mechanism during the generation phase. The retrieved videos act as grounding signals, providing the model with demonstrations of how the objects move. Our pipeline is designed to apply to any text-to-video diffusion model, conditioning a pretrained model on the retrieved samples with minimal fine-tuning. We demonstrate the superiority of our approach through established metrics, recently proposed benchmarks, and qualitative results, and we highlight additional applications of the framework.
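The abstract describes a retrieval-augmented conditioning scheme: the text prompt is used to retrieve reference clips, and a pretrained text-to-video diffusion model is conditioned on their motion with minimal fine-tuning. The Python sketch below illustrates that flow under stated assumptions only; RetrievalIndex, text_encoder, motion_encoder, and diffusion_model.sample are hypothetical placeholders for illustration, not the authors' released code or API.

# Minimal sketch of retrieval-augmented text-to-video generation.
# All names below are hypothetical placeholders, not the paper's implementation.
import numpy as np

class RetrievalIndex:
    """Nearest-neighbour index over embeddings of a video corpus."""
    def __init__(self, video_embeddings: np.ndarray, video_paths: list[str]):
        # Normalise embeddings so a dot product equals cosine similarity.
        norms = np.linalg.norm(video_embeddings, axis=1, keepdims=True)
        self.embeddings = video_embeddings / np.clip(norms, 1e-8, None)
        self.video_paths = video_paths

    def retrieve(self, query_embedding: np.ndarray, k: int = 3) -> list[str]:
        q = query_embedding / max(np.linalg.norm(query_embedding), 1e-8)
        scores = self.embeddings @ q          # cosine similarity to the query
        top_k = np.argsort(-scores)[:k]       # indices of the best-matching clips
        return [self.video_paths[i] for i in top_k]

def generate_with_retrieval(prompt: str,
                            text_encoder,      # maps text -> embedding (assumed)
                            motion_encoder,    # maps retrieved clips -> motion features (assumed)
                            diffusion_model,   # pretrained text-to-video model (assumed)
                            index: RetrievalIndex,
                            k: int = 3):
    """Condition a pretrained T2V diffusion model on retrieved clips.

    The retrieved videos act as grounding signals that demonstrate
    plausible motion for the objects described in the prompt.
    """
    query = text_encoder(prompt)
    retrieved_clips = index.retrieve(query, k=k)
    motion_context = motion_encoder(retrieved_clips)
    # The pretrained model is assumed to accept an extra conditioning input,
    # added through lightly fine-tuned layers while the base model stays fixed.
    return diffusion_model.sample(prompt=prompt, motion_context=motion_context)

As the abstract notes, the appeal of this design is that the base generator is reused as-is: retrieval supplies motion demonstrations, and only a small conditioning pathway needs fine-tuning, so the scheme can in principle be attached to any text-to-video diffusion model.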

View on arXiv
@article{peruzzo2025_2504.06672,
  title={RAGME: Retrieval Augmented Video Generation for Enhanced Motion Realism},
  author={Elia Peruzzo and Dejia Xu and Xingqian Xu and Humphrey Shi and Nicu Sebe},
  journal={arXiv preprint arXiv:2504.06672},
  year={2025}
}