arXiv:2007.12130
Sound2Sight: Generating Visual Dynamics from Sound and Context
A. Cherian, Moitreya Chatterjee, N. Ahuja
23 July 2020
Tags: VGen
Papers citing "Sound2Sight: Generating Visual Dynamics from Sound and Context" (6 of 6 papers shown)
Seeing Soundscapes: Audio-Visual Generation and Separation from Soundscapes Using Audio-Visual Separator
Minjae Kang, Martim Brandão
25 Apr 2025
X-Drive: Cross-modality consistent multi-sensor data synthesis for driving scenarios
Yichen Xie, Chenfeng Xu, C-T.John Peng, Shuqi Zhao, Nhat Ho, Alexander T. Pham, Mingyu Ding, M. Tomizuka, W. Zhan
Tags: DiffM
02 Nov 2024
Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer
Songwei Ge, Thomas Hayes, Harry Yang, Xiaoyue Yin, Guan Pang, David Jacobs, Jia-Bin Huang, Devi Parikh
Tags: ViT
07 Apr 2022
A Hierarchical Variational Neural Uncertainty Model for Stochastic Video Prediction
Moitreya Chatterjee, N. Ahuja, A. Cherian
Tags: UQCV, VGen, BDL
06 Oct 2021
Imagine This! Scripts to Compositions to Videos
Tanmay Gupta, Dustin Schwenk, Ali Farhadi, Derek Hoiem, Aniruddha Kembhavi
Tags: CoGe, VGen
10 Apr 2018
Discriminative Regularization for Generative Models
Alex Lamb, Vincent Dumoulin, Aaron Courville
Tags: DRL
09 Feb 2016