Cross-Modal Coordination Across a Diverse Set of Input Modalities
arXiv:2401.16347 · 29 January 2024
Jorge Sánchez, Rodrigo Laguna
VLM
Papers citing "Cross-Modal Coordination Across a Diverse Set of Input Modalities" (6 of 6 papers shown)
1. Edit Everything: A Text-Guided Generative System for Images Editing
   Defeng Xie, Ruichen Wang, Jiancang Ma, Chen Chen, H. Lu, D. Yang, Fobo Shi, Xiaodong Lin
   DiffM · 80 / 31 / 0 · 27 Apr 2023

2. SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model
   Yi-Jen Shih, Hsuan-Fu Wang, Heng-Jui Chang, Layne Berry, Hung-yi Lee, David F. Harwath
   VLM, CLIP · 38 / 32 / 0 · 03 Oct 2022

3. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
   Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
   MLLM, BDL, VLM, CLIP · 382 / 4,010 / 0 · 28 Jan 2022

4. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
   Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
   VLM, CLIP · 293 / 3,683 / 0 · 11 Feb 2021

5. Probabilistic Embeddings for Cross-Modal Retrieval
   Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio de Rezende, Yannis Kalantidis, Diane Larlus
   UQCV · 399 / 197 / 0 · 13 Jan 2021

6. Learning Deep Representations of Fine-grained Visual Descriptions
   Scott E. Reed, Zeynep Akata, Bernt Schiele, Honglak Lee
   OCL, VLM · 160 / 841 / 0 · 17 May 2016