Text-Guided Synthesis of Eulerian Cinemagraphs (arXiv:2307.03190)
6 July 2023
Aniruddha Mahapatra, Aliaksandr Siarohin, Hsin-Ying Lee, Sergey Tulyakov, Jun-Yan Zhu
DiffM, VGen

Papers citing "Text-Guided Synthesis of Eulerian Cinemagraphs"

12 / 12 papers shown
MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation
Jinbo Xing, Long Mai, Cusuh Ham, Jiahui Huang, Aniruddha Mahapatra, Chi-Wing Fu, T. Wong, Feng Liu
DiffM, VGen | 105 | 2 | 0 | 06 Feb 2025

Multi-subject Open-set Personalization in Video Generation
Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Yuwei Fang, Kwot Sin Lee, Ivan Skorokhodov, Kfir Aberman, Jun-Yan Zhu, Ming-Hsuan Yang, Sergey Tulyakov
DiffM, VGen | 65 | 7 | 0 | 10 Jan 2025

LoopGaussian: Creating 3D Cinemagraph with Multi-view Images via Eulerian Motion Field
Jiyang Li, Lechao Cheng, Zhangye Wang, Tingting Mu, Jingxuan He
3DGS | 20 | 4 | 0 | 13 Apr 2024

Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling
Xiaoyu Shi, Zhaoyang Huang, Fu-Yun Wang, Weikang Bian, Dasong Li, ..., Ka Chun Cheung, Simon See, Hongwei Qin, Jifeng Dai, Hongsheng Li
VGen, DiffM | 30 | 78 | 0 | 29 Jan 2024

Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models
Jiarui Xu, Sifei Liu, Arash Vahdat, Wonmin Byeon, Xiaolong Wang, Shalini De Mello
VLM | 198 | 318 | 0 | 08 Mar 2023

CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang
DiffM | 235 | 556 | 0 | 29 May 2022

Pretraining is All You Need for Image-to-Image Translation
Tengfei Wang, Ting Zhang, Bo Zhang, Hao Ouyang, Dong Chen, Qifeng Chen, Fang Wen
DiffM | 176 | 147 | 0 | 25 May 2022

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
MLLM, BDL, VLM, CLIP | 380 | 4,010 | 0 | 28 Jan 2022

Controllable Animation of Fluid Elements in Still Images
Aniruddha Mahapatra, K. Kulkarni
VGen | 31 | 44 | 0 | 06 Dec 2021

Palette: Image-to-Image Diffusion Models
Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi
DiffM, VLM | 320 | 1,570 | 0 | 10 Nov 2021

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
262 | 10,183 | 0 | 12 Dec 2018

Image-to-Image Translation with Conditional Adversarial Networks
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
SSeg | 203 | 19,191 | 0 | 21 Nov 2016