Vision Transformer Adapters for Generalizable Multitask Learning

arXiv:2308.12372 · 23 August 2023
Deblina Bhattacharjee, Sabine Süsstrunk, Mathieu Salzmann
Tags: ViT

Papers citing "Vision Transformer Adapters for Generalizable Multitask Learning"

4 of 4 citing papers shown:

1. Enhancing Monocular Depth Estimation with Multi-Source Auxiliary Tasks
   Alessio Quercia, Erenus Yildiz, Zhuo Cao, Kai Krajsek, Abigail Morrison, Ira Assent, Hanno Scharr
   22 Jan 2025

2. PolyMaX: General Dense Prediction with Mask Transformer
   Xuan S. Yang, Liangzhe Yuan, Kimberly Wilber, Astuti Sharma, Xiuye Gu, ..., Stephanie Debats, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko, Liang-Chieh Chen
   09 Nov 2023

3. MulT: An End-to-End Multitask Learning Transformer
   Deblina Bhattacharjee, Tong Zhang, Sabine Süsstrunk, Mathieu Salzmann
   Tags: ViT
   17 May 2022

4. Efficiently Identifying Task Groupings for Multi-Task Learning
   Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil, Chelsea Finn
   10 Sep 2021