StereoCrafter: Diffusion-based Generation of Long and High-fidelity Stereoscopic 3D from Monocular Videos

arXiv: 2409.07447 · 11 September 2024
Authors: Sijie Zhao, Wenbo Hu, Xiaodong Cun, Yong Zhang, Xiaoyu Li, Zhe Kong, Xiangjun Gao, Muyao Niu, Ying Shan
Topics: VGen, DiffM, MDE

Papers citing "StereoCrafter: Diffusion-based Generation of Long and High-fidelity Stereoscopic 3D from Monocular Videos"

3 / 3 papers shown

Eye2Eye: A Simple Approach for Monocular-to-Stereo Video Synthesis
  Michal Geyer, Omer Tov, Linyi Jin, Richard Tucker, Inbar Mosseri, Tali Dekel, Noah Snavely
  Topics: DiffM, VGen · 30 Apr 2025

Vivid4D: Improving 4D Reconstruction from Monocular Video by Video Inpainting
  Jiaxin Huang, Sheng Miao, Bangbang Yang, Yuewen Ma, Yiyi Liao
  Topics: VGen, MDE · 15 Apr 2025

Emerging Properties in Self-Supervised Vision Transformers
  Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
  29 Apr 2021