Video Latent Flow Matching: Optimal Polynomial Projections for Video Interpolation and Extrapolation

1 February 2025
Yang Cao
Zhao Song
Chiwun Yang
Abstract

This paper introduces an efficient video modeling process called Video Latent Flow Matching (VLFM). Unlike prior works, which randomly sample latent patches for video generation, our method builds on strong pre-trained image generation models and models a caption-guided flow of latent patches that can be decoded into time-dependent video frames. We first conjecture that the frames of a video are differentiable with respect to time in some latent space. Based on this conjecture, we introduce the HiPPO framework to approximate the optimal polynomial projection for generating the probability path. Our approach enjoys the theoretical benefits of bounded universal approximation error and timescale robustness. Moreover, VLFM possesses interpolation and extrapolation abilities for video generation at arbitrary frame rates. We conduct experiments on several text-to-video datasets to showcase the effectiveness of our method.
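The abstract describes two ingredients: projecting a time-dependent latent trajectory onto an optimal polynomial basis (via the HiPPO framework), and learning a caption-conditioned flow over that representation. The sketch below illustrates both in miniature under assumptions not taken from the paper: a plain least-squares Legendre projection stands in for the HiPPO projection, a toy `VelocityNet` stands in for the flow model, and random tensors replace real frame latents and caption embeddings.

```python
# Minimal sketch (hypothetical names and sizes, not the paper's implementation):
#   1. HiPPO-style optimal polynomial projection of a latent trajectory z(t),
#      here approximated by a least-squares fit onto Legendre polynomials.
#   2. A caption-conditioned flow-matching loss on the coefficient representation,
#      using the standard linear probability path x_s = (1-s) x0 + s x1.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from numpy.polynomial.legendre import legvander


def legendre_coeffs(latents: np.ndarray, order: int) -> np.ndarray:
    """Project frame latents z(t_1..t_T) (shape [T, D]) onto the first
    `order` Legendre polynomials over [-1, 1] via least squares."""
    T = latents.shape[0]
    t = np.linspace(-1.0, 1.0, T)            # normalized frame times
    basis = legvander(t, order - 1)          # [T, order] Legendre design matrix
    coeffs, *_ = np.linalg.lstsq(basis, latents, rcond=None)
    return coeffs                            # [order, D]


def eval_trajectory(coeffs: np.ndarray, times: np.ndarray) -> np.ndarray:
    """Evaluate the polynomial representation at arbitrary times in (roughly)
    [-1, 1]; querying unseen times is what gives interpolation/extrapolation
    at arbitrary frame rates."""
    basis = legvander(times, coeffs.shape[0] - 1)
    return basis @ coeffs                    # [len(times), D]


class VelocityNet(nn.Module):
    """Toy caption-conditioned velocity field v_theta(x_s, s, caption)."""
    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x, s, cond):
        return self.net(torch.cat([x, cond, s], dim=-1))


def flow_matching_loss(model, x1, cond):
    """Linear-path flow matching: x_s = (1-s) x0 + s x1 with target velocity
    x1 - x0; the model regresses that velocity."""
    x0 = torch.randn_like(x1)                # noise endpoint
    s = torch.rand(x1.shape[0], 1)           # flow time in [0, 1]
    xs = (1 - s) * x0 + s * x1
    return F.mse_loss(model(xs, s, cond), x1 - x0)


if __name__ == "__main__":
    T, D, order = 16, 8, 6                   # toy sizes, purely illustrative
    z = np.random.randn(T, D)                # stand-in frame latents
    c = legendre_coeffs(z, order)            # [order, D]
    z_dense = eval_trajectory(c, np.linspace(-1.0, 1.2, 40))
    print("resampled trajectory:", z_dense.shape)   # interpolation + mild extrapolation

    model = VelocityNet(dim=order * D, cond_dim=32)
    x1 = torch.from_numpy(c.reshape(1, -1).astype(np.float32))
    caption_emb = torch.randn(1, 32)         # stand-in caption embedding
    loss = flow_matching_loss(model, x1, caption_emb)
    loss.backward()
    print("flow matching loss:", float(loss))
```

The demo fits a degree-5 Legendre representation to 16 toy frame latents, resamples it at 40 time points (including a small extrapolation past the last frame), and takes one flow-matching gradient step on the flattened coefficients; the real method operates on decoder-compatible latent patches rather than random vectors.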

@article{cao2025_2502.00500,
  title={Video Latent Flow Matching: Optimal Polynomial Projections for Video Interpolation and Extrapolation},
  author={Yang Cao and Zhao Song and Chiwun Yang},
  journal={arXiv preprint arXiv:2502.00500},
  year={2025}
}