arXiv: 2408.15239
Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation
27 August 2024
Xiaojuan Wang, Boyang Zhou, Brian L. Curless, Ira Kemelmacher-Shlizerman, Aleksander Holynski, Steven M. Seitz
Tags: DiffM
Papers citing "Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation" (7 papers)

EGVD: Event-Guided Video Diffusion Model for Physically Realistic Large-Motion Frame Interpolation
Ziran Zhang, Xiaohui Li, Yihao Liu, Yujin Wang, Yueting Chen, Tianfan Xue, Shi Guo
Tags: DiffM, VGen · 26 Mar 2025

Adapting Image-to-Video Diffusion Models for Large-Motion Frame Interpolation
Luoxu Jin, Hiroshi Watanabe
Tags: DiffM, VGen · 22 Dec 2024

Generative Inbetweening through Frame-wise Conditions-Driven Video Generation
Tianyi Zhu, Dongwei Ren, Qilong Wang, Xiaohe Wu, W. Zuo
Tags: VGen · 16 Dec 2024

Generating 3D-Consistent Videos from Unposed Internet Photos
Gene Chou, Kai Zhang, Sai Bi, Hao Tan, Zexiang Xu, Fujun Luan, Bharath Hariharan, Noah Snavely
Tags: 3DGS, VGen · 20 Nov 2024

Framer: Interactive Frame Interpolation
Wen Wang, Qiuyu Wang, Kecheng Zheng, Hao Ouyang, Zhekai Chen, Biao Gong, Hao Chen, Yujun Shen, Chunhua Shen
Tags: VGen · 24 Oct 2024

ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler
Serin Yang, Taesung Kwon, Jong Chul Ye
Tags: VGen, DiffM · 08 Oct 2024

Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach
Yaofang Liu, Y. Ren, Xiaodong Cun, Aitor Artola, Yang Liu, Tieyong Zeng, Raymond H. Chan, Jean-Michel Morel
Tags: VGen, DiffM · 04 Oct 2024