ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2301.03461 · Cited By
DeMT: Deformable Mixer Transformer for Multi-Task Learning of Dense Prediction

9 January 2023
Yang Yang, Yibo Yang, L. Zhang
ViT

Papers citing "DeMT: Deformable Mixer Transformer for Multi-Task Learning of Dense Prediction"

5 / 5 papers shown
Swiss Army Knife: Synergizing Biases in Knowledge from Vision Foundation Models for Multi-Task Learning
Yuxiang Lu, Shengcao Cao, Yu-xiong Wang
18 Oct 2024 · 45 · 1 · 0
MmAP: Multi-modal Alignment Prompt for Cross-domain Multi-task Learning
Yi Xin, Junlong Du, Qiang Wang, Ke Yan, Shouhong Ding
VLM
14 Dec 2023 · 34 · 45 · 0
Token Contrast for Weakly-Supervised Semantic Segmentation
Lixiang Ru, Heliang Zheng, Yibing Zhan, Bo Du
ViT
02 Mar 2023 · 35 · 86 · 0
MulT: An End-to-End Multitask Learning Transformer
Deblina Bhattacharjee, Tong Zhang, Sabine Süsstrunk, Mathieu Salzmann
ViT
17 May 2022 · 34 · 62 · 0
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
24 Feb 2021 · 263 · 3,604 · 0