What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs
Tal Shaharabany, Yoad Tewel, Lior Wolf
arXiv:2206.09358 · 19 June 2022 · ObjD

Papers citing "What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs" (8 / 8 papers shown)

Auto-Vocabulary Semantic Segmentation
Osman Ülger, Maksymilian Kulicki, Yuki M. Asano, Martin R. Oswald
VLM · 07 Dec 2023

Spatial-Aware Token for Weakly Supervised Object Localization
Ping Wu, Wei Zhai, Yang Cao, Jiebo Luo, Zhengjun Zha
WSOL · 18 Mar 2023

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
MLLM, BDL, VLM, CLIP · 28 Jan 2022

Text2Mesh: Text-Driven Neural Stylization for Meshes
O. Michel, Roi Bar-On, Richard Liu, Sagie Benaim, Rana Hanocka
CLIP, AI4CE · 06 Dec 2021

Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks
Peihao Zhu, Rameen Abdal, John C. Femiani, Peter Wonka
GAN · 15 Oct 2021

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
VPVLM, CLIP, VLM · 02 Sep 2021

HarDNet: A Low Memory Traffic Network
P. Chao, Chao-Yang Kao, Yunxing Ruan, Chien-Hsiang Huang, Y. Lin
03 Sep 2019

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH · 17 Apr 2017