arXiv: 2104.04386
Look Before You Leap: Learning Landmark Features for One-Stage Visual Grounding
9 April 2021
Binbin Huang, Dongze Lian, Weixin Luo, Shenghua Gao
ObjD
Papers citing "Look Before You Leap: Learning Landmark Features for One-Stage Visual Grounding" (9 papers shown)
Language-Guided Diffusion Model for Visual Grounding
Sijia Chen, Baochun Li
18 Aug 2023
Referring Camouflaged Object Detection
Xuying Zhang, Bo Yin, Zheng Lin, Qibin Hou, Deng-Ping Fan, Ming-Ming Cheng
13 Jun 2023
RSVG: Exploring Data and Models for Visual Grounding on Remote Sensing Data
Yangfan Zhan, Zhitong Xiong, Yuan Yuan
23 Oct 2022
Dynamic MDETR: A Dynamic Multimodal Transformer Decoder for Visual Grounding
Fengyuan Shi, Ruopeng Gao, Weilin Huang, Limin Wang
28 Sep 2022
RefCrowd: Grounding the Target in Crowd with Referring Expressions
Heqian Qiu, Hongliang Li, Taijin Zhao, Lanxiao Wang, Qingbo Wu, Fanman Meng
ObjD
16 Jun 2022
Improving Visual Grounding with Visual-Linguistic Verification and Iterative Reasoning
Li Yang, Yan Xu, Chunfeng Yuan, Wei Liu, Bing Li, Weiming Hu
ObjD
30 Apr 2022
TubeDETR: Spatio-Temporal Video Grounding with Transformers
Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, Cordelia Schmid
ViT
30 Mar 2022
Unpaired Referring Expression Grounding via Bidirectional Cross-Modal Matching
Hengcan Shi, Munawar Hayat, Jianfei Cai
ObjD
18 Jan 2022
A Real-Time Cross-modality Correlation Filtering Method for Referring Expression Comprehension
Yue Liao, Si Liu, Guanbin Li, Fei-Yue Wang, Yanjie Chen, Chao Qian, Bo-wen Li
ObjD
16 Sep 2019