2312.00360: Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning
1 December 2023
Shaohua Dong, Yunhe Feng, Qing Yang, Yan Huang, Dongfang Liu, Heng Fan
VLM

Cited By
Papers citing "Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning" (8 / 8 papers shown)

HDBFormer: Efficient RGB-D Semantic Segmentation with A Heterogeneous Dual-Branch Framework
Shuobin Wei, Zhuang Zhou, Zhengan Lu, Zizhao Yuan, Binghua Su
MDE
42 · 0 · 0 · 18 Apr 2025

DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation
Bo Yin, Xuying Zhang, Zhongyu Li, Li Liu, Ming-Ming Cheng, Qibin Hou
24 · 39 · 0 · 18 Sep 2023

Visual Prompt Tuning for Generative Transfer Learning
Kihyuk Sohn, Yuan Hao, José Lezama, Luisa F. Polanía, Huiwen Chang, Han Zhang, Irfan Essa, Lu Jiang
VPVLM, VLM
51 · 80 · 0 · 03 Oct 2022

AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo
138 · 631 · 0 · 26 May 2022

Omnivore: A Single Model for Many Visual Modalities
Rohit Girdhar, Mannat Singh, Nikhil Ravi, Laurens van der Maaten, Armand Joulin, Ishan Misra
209 · 222 · 0 · 20 Jan 2022

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
VPVLM, CLIP, VLM
322 · 2,108 · 0 · 02 Sep 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
278 · 3,784 · 0 · 18 Apr 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
263 · 3,538 · 0 · 24 Feb 2021