RemoteCLIP: A Vision Language Foundation Model for Remote Sensing
arXiv 2306.11029 · 19 June 2023
F. Liu, Delong Chen, Zhan-Rong Guan, Xiaocong Zhou, Jiale Zhu, Qiaolin Ye, Liyong Fu, Jun Zhou
Tags: VLM

Papers citing "RemoteCLIP: A Vision Language Foundation Model for Remote Sensing" (14 of 14 papers shown)

LISAT: Language-Instructed Segmentation Assistant for Satellite Imagery
Jerome Quenum, Wen-Han Hsieh, Tsung-Han Wu, Ritwik Gupta, Trevor Darrell, David M. Chan
Tags: MLLM, VLM · 05 May 2025

XeMap: Contextual Referring in Large-Scale Remote Sensing Environments
Y. Li, Lu Si, Y. T. Hou, Chengaung Liu, B. Li, Hongjian Fang, J. Zhang
30 Apr 2025

EcoWikiRS: Learning Ecological Representation of Satellite Images from Weak Supervision with Species Observations and Wikipedia
Valerie Zermatten, J. Castillo-Navarro, Pallavi Jain, D. Tuia, Diego Marcos
28 Apr 2025

SRMF: A Data Augmentation and Multimodal Fusion Approach for Long-Tail UHR Satellite Image Segmentation
Yulong Guo, Zilun Zhang, Yongheng Shang, Tiancheng Zhao, Shuiguang Deng, Yingchun Yang, Jianwei Yin
28 Apr 2025

Lightweight Adapter Learning for More Generalized Remote Sensing Change Detection
Dou Quan, Rufan Zhou, Shuang Wang, Ning Huyan, Dong Zhao, Yunan Li, L. Jiao
28 Apr 2025

What Do Self-Supervised Vision Transformers Learn?
Namuk Park, Wonjae Kim, Byeongho Heo, Taekyung Kim, Sangdoo Yun
Tags: SSL · 01 May 2023

Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning
Colorado Reed, Ritwik Gupta, Shufan Li, S. Brockman, Christopher Funk, Brian Clipp, Kurt Keutzer, Salvatore Candido, M. Uyttendaele, Trevor Darrell
30 Dec 2022

Revealing the Dark Secrets of Masked Image Modeling
Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng-Wei Zhang, Han Hu, Yue Cao
Tags: VLM · 26 May 2022

VLP: A Survey on Vision-Language Pre-training
Feilong Chen, Duzhen Zhang, Minglun Han, Xiuyi Chen, Jing Shi, Shuang Xu, Bo Xu
Tags: VLM · 18 Feb 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
Tags: ViT, TPM · 11 Nov 2021

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
Tags: VPVLM, CLIP, VLM · 02 Sep 2021

How Much Can CLIP Benefit Vision-and-Language Tasks?
Sheng Shen, Liunian Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Z. Yao, Kurt Keutzer
Tags: CLIP, VLM, MLLM · 13 Jul 2021

Open-vocabulary Object Detection via Vision and Language Knowledge Distillation
Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, Yin Cui
Tags: VLM, ObjD · 28 Apr 2021

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig
Tags: VLM, CLIP · 11 Feb 2021