DeMT: Deformable Mixer Transformer for Multi-Task Learning of Dense Prediction
arXiv: 2301.03461
9 January 2023
Yang Yang, Yibo Yang, L. Zhang
Tag: ViT
Papers citing "DeMT: Deformable Mixer Transformer for Multi-Task Learning of Dense Prediction" (5 of 5 papers shown)
1. Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning
   Yuxiang Lu, Shengcao Cao, Yu-xiong Wang. 18 Oct 2024.

2. MmAP: Multi-modal Alignment Prompt for Cross-domain Multi-task Learning
   Yi Xin, Junlong Du, Qiang Wang, Ke Yan, Shouhong Ding. Tag: VLM. 14 Dec 2023.

3. Token Contrast for Weakly-Supervised Semantic Segmentation
   Lixiang Ru, Heliang Zheng, Yibing Zhan, Bo Du. Tag: ViT. 02 Mar 2023.

4. MulT: An End-to-End Multitask Learning Transformer
   Deblina Bhattacharjee, Tong Zhang, Sabine Süsstrunk, Mathieu Salzmann. Tag: ViT. 17 May 2022.

5. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
   Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. Tag: ViT. 24 Feb 2021.