Neutral Editing Framework for Diffusion-based Video Editing

10 December 2023
Sunjae Yoon, Gwanhyeong Koo, Jiajing Hong, Changdong Yoo
Communities: VGen, DiffM
arXiv: 2312.06708
Papers citing "Neutral Editing Framework for Diffusion-based Video Editing"

5 of 5 citing papers shown.

  1. C-TPT: Calibrated Test-Time Prompt Tuning for Vision-Language Models via Text Feature Dispersion
     Hee Suk Yoon, Eunseop Yoon, Joshua Tian Jin Tee, M. Hasegawa-Johnson, Yingzhen Li, C. Yoo (VLM) · 21 Mar 2024
  2. Pick-a-Pic: An Open Dataset of User Preferences for Text-to-Image Generation
     Yuval Kirstain, Adam Polyak, Uriel Singer, Shahbuland Matiana, Joe Penna, Omer Levy (EGVM) · 02 May 2023
  3. Video-P2P: Video Editing with Cross-attention Control
     Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe-nan Lin, Jiaya Jia (DiffM, VGen) · 08 Mar 2023
  4. CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
     Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang (DiffM) · 29 May 2022
  5. RePaint: Inpainting using Denoising Diffusion Probabilistic Models
     Andreas Lugmayr, Martin Danelljan, Andrés Romero, F. I. F. Richard Yu, Radu Timofte, Luc Van Gool (DiffM) · 24 Jan 2022