VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet

26 July 2023
Zhihao Hu
Dong Xu
    DiffM
    VGen

Papers citing "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet"

50 / 55 papers shown
Subject-driven Video Generation via Disentangled Identity and Motion
Daneul Kim
Jingxu Zhang
W. Jin
Sunghyun Cho
Qi Dai
Jaesik Park
Chong Luo
DiffM
VGen
103
0
0
23 Apr 2025
Understanding Attention Mechanism in Video Diffusion Models
Bingyan Liu
Chengyu Wang
Tongtong Su
Huan Ten
Jun Huang
K. Guo
Kui Jia
VGen
64
0
0
16 Apr 2025
OmniVDiff: Omni Controllable Video Diffusion for Generation and Understanding
Dianbing Xi
J. Wang
Yuanzhi Liang
Xi Qiu
Yuchi Huo
R. Wang
Chi Zhang
X. Li
DiffM
VGen
65
0
0
15 Apr 2025
Beyond Wide-Angle Images: Unsupervised Video Portrait Correction via Spatiotemporal Diffusion Adaptation
Wenbo Nie
Lang Nie
Chunyu Lin
J. Chen
Ke Xing
Jiyuan Wang
Yao Zhao
DiffM
VGen
53
0
0
01 Apr 2025
FullDiT: Multi-Task Video Generative Foundation Model with Full Attention
Xuan Ju
Weicai Ye
Quande Liu
Qiulin Wang
Xintao Wang
Pengfei Wan
Di Zhang
Kun Gai
Qiang Xu
VGen
39
1
0
25 Mar 2025
FragFM: Efficient Fragment-Based Molecular Generation via Discrete Flow Matching
Joongwon Lee
Seonghwan Kim
Wou Youn Kim
39
0
0
19 Feb 2025
InterDyn: Controllable Interactive Dynamics with Video Diffusion Models
Rick Akkerman
Haiwen Feng
M. Black
Dimitrios Tzionas
Victoria Fernandez-Abrevaya
VGen
AI4CE
100
3
0
16 Dec 2024
Video Diffusion Transformers are In-Context Learners
Zhengcong Fei
Di Qiu
Changqian Yu
Debang Li
Mingyuan Fan
VGen
DiffM
142
2
0
14 Dec 2024
DIVE: Taming DINO for Subject-Driven Video Editing
Yi Huang
Wei Xiong
He Zhang
Chaoqi Chen
Jianzhuang Liu
Mingfu Yan
Shifeng Chen
VGen
DiffM
76
0
0
04 Dec 2024
SPAgent: Adaptive Task Decomposition and Model Selection for General Video Generation and Editing
Rong-Cheng Tu
Wenhao Sun
Zhao Jin
Jingyi Liao
Jiaxing Huang
Dacheng Tao
VGen
DiffM
92
3
0
28 Nov 2024
OnlyFlow: Optical Flow based Motion Conditioning for Video Diffusion Models
Mathis Koroglu
Hugo Caselles-Dupré
Guillaume Jeanneret Sanmiguel
Matthieu Cord
VGen
DiffM
20
1
0
15 Nov 2024
EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation
Xiaofeng Wang
Kang Zhao
F. Liu
Jiayu Wang
Guosheng Zhao
Xiaoyi Bao
Zheng Hua Zhu
Yingya Zhang
Xingang Wang
VGen
56
6
0
13 Nov 2024
ByTheWay: Boost Your Text-to-Video Generation Model to Higher Quality in a Training-free Way
Jiazi Bu
Pengyang Ling
Pan Zhang
Tong Wu
Xiaoyi Dong
Yuhang Zang
Yuhang Cao
Dahua Lin
Jiaqi Wang
DiffM
VGen
28
0
0
08 Oct 2024
AVID: Adapting Video Diffusion Models to World Models
Marc Rigter
Tarun Gupta
Agrin Hilmkil
Chao Ma
VGen
17
3
0
01 Oct 2024
Multi-Modal Generative AI: Multi-modal LLM, Diffusion and Beyond
Hong Chen
Xin Wang
Yuwei Zhou
Bin Huang
Yipeng Zhang
Wei Feng
Houlun Chen
Zeyang Zhang
Siao Tang
Wenwu Zhu
DiffM
44
7
0
23 Sep 2024
EditBoard: Towards a Comprehensive Evaluation Benchmark for Text-Based Video Editing Models
Yupeng Chen
Penglin Chen
Xiaoyu Zhang
Yixian Huang
Qian Xie
DiffM
36
1
0
15 Sep 2024
AMG: Avatar Motion Guided Video Generation
Zhangsihao Yang
Mengyi Shan
Mohammad Farazi
Wenhui Zhu
Yanxi Chen
Xuanzhao Dong
Yalin Wang
VGen
DiffM
64
0
0
02 Sep 2024
IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation
Yuanhao Zhai
K. Lin
Linjie Li
Chung-Ching Lin
Jianfeng Wang
Zhengyuan Yang
David Doermann
Junsong Yuan
Zicheng Liu
Lijuan Wang
DiffM
VGen
21
3
0
15 Jul 2024
Diffusion Model-Based Video Editing: A Survey
Wenhao Sun
Rong-Cheng Tu
Jingyi Liao
Dacheng Tao
VGen
55
22
0
26 Jun 2024
COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing
Jiangshan Wang
Yue Ma
Jiayi Guo
Yicheng Xiao
Gao Huang
Xiu Li
DiffM
23
17
0
13 Jun 2024
HOI-Swap: Swapping Objects in Videos with Hand-Object Interaction Awareness
Zihui Xue
Mi Luo
Changan Chen
Kristen Grauman
DiffM
22
6
0
11 Jun 2024
NaRCan: Natural Refined Canonical Image with Integration of Diffusion Prior for Video Editing
Ting-Hsuan Chen
Jiewen Chan
Hau-Shiang Shiu
Shih-Han Yen
Chang-Han Yeh
Yu-Lun Liu
VGen
DiffM
40
3
0
10 Jun 2024
Ctrl-V: Higher Fidelity Video Generation with Bounding-Box Controlled Object Motion
Ge Ya Luo
Zhi Hao Luo
Anthony Gosselin
Alexia Jolicoeur-Martineau
Christopher Pal
VGen
DiffM
24
0
0
09 Jun 2024
Turning Text and Imagery into Captivating Visual Video
Mingming Wang
Elijah Miller
VGen
32
0
0
03 Jun 2024
Text Prompting for Multi-Concept Video Customization by Autoregressive Generation
D. Kothandaraman
Kihyuk Sohn
Ruben Villegas
P. Voigtlaender
Dinesh Manocha
Mohammad Babaeizadeh
VGen
DiffM
30
2
0
22 May 2024
Enhanced Creativity and Ideation through Stable Video Synthesis
Elijah Miller
Thomas Dupont
Mingming Wang
VGen
28
0
0
22 May 2024
Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
Han Lin
Jaemin Cho
Abhaysinh Zala
Mohit Bansal
DiffM
VGen
61
20
0
15 Apr 2024
EVA: Zero-shot Accurate Attributes and Multi-Object Video Editing
Xiangpeng Yang
Linchao Zhu
Hehe Fan
Yi Yang
DiffM
VGen
14
9
0
24 Mar 2024
Spectral Motion Alignment for Video Motion Transfer using Diffusion Models
Geon Yeong Park
Hyeonho Jeong
Sang Wan Lee
Jong Chul Ye
VGen
DiffM
32
10
0
22 Mar 2024
FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation
Shuai Yang
Yifan Zhou
Ziwei Liu
Chen Change Loy
VGen
DiffM
52
26
0
19 Mar 2024
DreamMotion: Space-Time Self-Similar Score Distillation for Zero-Shot Video Editing
Hyeonho Jeong
Jinho Chang
Geon Yeong Park
Jong Chul Ye
DiffM
VGen
27
13
0
18 Mar 2024
Intention-driven Ego-to-Exo Video Generation
Hongcheng Luo
Kai Zhu
Wei Zhai
Yang Cao
DiffM
VGen
20
3
0
14 Mar 2024
UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified Attention Control
Xuweiyi Chen
Tian Xia
Sihan Xu
VGen
DiffM
29
8
0
04 Mar 2024
Context-aware Talking Face Video Generation
Meidai Xuanyuan
Yuwang Wang
Honglei Guo
Qionghai Dai
DiffM
27
0
0
28 Feb 2024
Human Video Translation via Query Warping
Haiming Zhu
Yangyang Xu
Shengfeng He
DiffM
27
0
0
19 Feb 2024
ActAnywhere: Subject-Aware Video Background Generation
Boxiao Pan
Zhan Xu
Chun-Hao Paul Huang
Krishna Kumar Singh
Yang Zhou
Leonidas J. Guibas
Jimei Yang
VGen
DiffM
24
3
0
19 Jan 2024
TrailBlazer: Trajectory Control for Diffusion-Based Video Generation
W. Ma
J. P. Lewis
W. Kleijn
DiffM
VGen
13
34
0
31 Dec 2023
FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis
Feng Liang
Bichen Wu
Jialiang Wang
Licheng Yu
Kunpeng Li
...
Ishan Misra
Jia-Bin Huang
Peizhao Zhang
Peter Vajda
Diana Marculescu
VGen
DiffM
24
32
0
29 Dec 2023
A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization Inversion for Zero-Shot Video Editing
Maomao Li
Yu Li
Tianyu Yang
Yunfei Liu
Dongxu Yue
Zhihui Lin
Dong Xu
VGen
10
8
0
10 Dec 2023
Fine-grained Controllable Video Generation via Object Appearance and Context
Hsin-Ping Huang
Yu-Chuan Su
Deqing Sun
Lu Jiang
Xuhui Jia
Yukun Zhu
Ming-Hsuan Yang
DiffM
VGen
13
13
0
05 Dec 2023
VideoBooth: Diffusion-based Video Generation with Image Prompts
Yuming Jiang
Tianxing Wu
Shuai Yang
Chenyang Si
Dahua Lin
Yu Qiao
Chen Change Loy
Ziwei Liu
DiffM
VGen
32
65
0
01 Dec 2023
VBench: Comprehensive Benchmark Suite for Video Generative Models
Ziqi Huang
Yinan He
Jiashuo Yu
Fan Zhang
Chenyang Si
...
Xinyuan Chen
Limin Wang
Dahua Lin
Yu Qiao
Ziwei Liu
VGen
62
346
0
29 Nov 2023
Flow-Guided Diffusion for Video Inpainting
Bohai Gu
Yongsheng Yu
Hengrui Fan
Libo Zhang
VGen
DiffM
28
12
0
26 Nov 2023
I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models
Shiwei Zhang
Jiayu Wang
Yingya Zhang
Kang Zhao
Hangjie Yuan
Z. Qin
Xiang Wang
Deli Zhao
Jingren Zhou
DiffM
VGen
26
198
0
07 Nov 2023
LatentWarp: Consistent Diffusion Latents for Zero-Shot Video-to-Video Translation
Yuxiang Bao
Di Qiu
Guoliang Kang
Baochang Zhang
Bo Jin
Kaiye Wang
Pengfei Yan
VGen
DiffM
22
7
0
01 Nov 2023
A Survey on Video Diffusion Models
Zhen Xing
Qijun Feng
Haoran Chen
Qi Dai
Hang-Rui Hu
Hang Xu
Zuxuan Wu
Yu-Gang Jiang
EGVM
VGen
55
115
0
16 Oct 2023
LOVECon: Text-driven Training-Free Long Video Editing with ControlNet
Zhenyi Liao
Zhijie Deng
DiffM
13
7
0
15 Oct 2023
ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation
Bo Peng
Xinyuan Chen
Yaohui Wang
Chaochao Lu
Yu Qiao
DiffM
VGen
14
7
0
11 Oct 2023
Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models
Hyeonho Jeong
Jong Chul Ye
DiffM
VGen
20
41
0
02 Oct 2023
Context-PIPs: Persistent Independent Particles Demands Spatial Context Features
Weikang Bian
Zhaoyang Huang
Xiaoyu Shi
Yitong Dong
Yijin Li
Hongsheng Li
19
6
0
03 Jun 2023