MulT: An End-to-End Multitask Learning Transformer

17 May 2022
Deblina Bhattacharjee, Tong Zhang, Sabine Süsstrunk, Mathieu Salzmann
Tags: ViT

Papers citing "MulT: An End-to-End Multitask Learning Transformer" (9 papers)

SGW-based Multi-Task Learning in Vision Tasks
Ruiyuan Zhang, Yuyao Chen, Yuchi Huo, Jiaxiang Liu, Dianbing Xi, Jie Liu, Chao Wu
03 Oct 2024

AutoTask: Task Aware Multi-Faceted Single Model for Multi-Task Ads Relevance
Shouchang Guo, Sonam Damani, Keng-hao Chang
09 Jul 2024

4M: Massively Multimodal Masked Modeling
David Mizrahi, Roman Bachmann, Oğuzhan Fatih Kar, Teresa Yeo, Mingfei Gao, Afshin Dehghan, Amir Zamir
Tags: MLLM
11 Dec 2023

PolyMaX: General Dense Prediction with Mask Transformer
Xuan S. Yang, Liangzhe Yuan, Kimberly Wilber, Astuti Sharma, Xiuye Gu, ..., Stephanie Debats, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Liang-Chieh Chen
09 Nov 2023

Multi-Similarity Contrastive Learning
Emily Mu, John Guttag, Maggie Makar
Tags: SSL
06 Jul 2023

InvPT++: Inverted Pyramid Multi-Task Transformer for Visual Scene Understanding
Hanrong Ye, Dan Xu
Tags: ViT
08 Jun 2023

MTLSegFormer: Multi-task Learning with Transformers for Semantic Segmentation in Precision Agriculture
D. Gonçalves, J. M. Junior, Pedro Zamboni, H. Pistori, Jonathan Li, Keiller Nogueira, W. Gonçalves
04 May 2023

Are Transformers More Robust Than CNNs?
Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie
Tags: ViT, AAML
10 Nov 2021

A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016