Motion Dreamer: Boundary Conditional Motion Reasoning for Physically Coherent Video Generation

30 November 2024
Tianshuo Xu
Zhifei Chen
Leyi Wu
Hao Lu
Yuying Chen
Lihui Jiang
Bingbing Liu
Yingcong Chen
Abstract

Recent advances in video generation have shown promise for generating future scenarios, which is critical for planning and control in autonomous driving and embodied intelligence. However, real-world applications demand more than visually plausible predictions: they require reasoning about object motions from explicitly defined boundary conditions, such as an initial scene image and partial object motion. We term this capability Boundary Conditional Motion Reasoning. Current approaches either neglect explicit user-defined motion constraints, producing physically inconsistent motions, or conversely demand complete motion inputs, which are rarely available in practice. We introduce Motion Dreamer, a two-stage framework that explicitly separates motion reasoning from visual synthesis to address these limitations. Our approach introduces instance flow, a sparse-to-dense motion representation that enables effective integration of partial user-defined motions, together with a motion-inpainting strategy that enables robust reasoning about the motions of other objects. Extensive experiments demonstrate that Motion Dreamer significantly outperforms existing methods, achieving superior motion plausibility and visual realism and thus bridging the gap toward practical boundary conditional motion reasoning. Our webpage is available at: this https URL.
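To make the sparse-to-dense idea behind instance flow and the motion-inpainting training signal more concrete, below is a minimal PyTorch sketch. The function names, tensor shapes, and the random-dropout scheme are illustrative assumptions, not the authors' implementation: it rasterizes per-instance, user-specified displacements into a dense flow map, leaves unspecified instances empty for the reasoning stage to fill in, and randomly drops known motions during training so the model learns to inpaint them from the remaining boundary conditions.

```python
import torch


def rasterize_instance_flow(instance_masks, instance_motions, height, width):
    """Build a dense 2-channel flow map from per-instance (dx, dy) motions.

    instance_masks:   (N, H, W) boolean tensors, one per object instance.
    instance_motions: (N, 2) displacements specified by the user; rows may be
                      NaN for objects whose motion the model must reason out.
    Returns a (2, H, W) flow map and a (1, H, W) validity mask.
    (Hypothetical helper for illustration only.)
    """
    flow = torch.zeros(2, height, width)
    valid = torch.zeros(1, height, width)
    for mask, motion in zip(instance_masks, instance_motions):
        if torch.isnan(motion).any():
            continue  # motion unknown: left for the reasoning stage to inpaint
        flow[:, mask] = motion[:, None]   # broadcast (2, 1) over masked pixels
        valid[:, mask] = 1.0
    return flow, valid


def random_motion_dropout(flow, valid, drop_prob=0.5):
    """Training-time masking: hide some known motions so the model learns to
    reconstruct them from partial inputs (a simple stand-in for the paper's
    motion-inpainting strategy)."""
    keep = (torch.rand_like(valid) > drop_prob).float()
    return flow * keep, valid * keep
```

As a usage sketch, one would pass the masked flow and validity mask as conditioning to the motion-reasoning stage, whose output dense motion then drives the visual-synthesis stage; the exact conditioning interface is not specified here.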

@article{xu2025_2412.00547,
  title={Motion Dreamer: Boundary Conditional Motion Reasoning for Physically Coherent Video Generation},
  author={Tianshuo Xu and Zhifei Chen and Leyi Wu and Hao Lu and Yuying Chen and Lihui Jiang and Bingbing Liu and Yingcong Chen},
  journal={arXiv preprint arXiv:2412.00547},
  year={2025}
}