ResearchTrend.AI

AdapterFusion: Non-Destructive Task Composition for Transfer Learning
1 May 2020
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rucklé, Kyunghyun Cho, Iryna Gurevych

Papers citing "AdapterFusion: Non-Destructive Task Composition for Transfer Learning"

45 / 145 papers shown
FairDistillation: Mitigating Stereotyping in Language Models
Pieter Delobelle, Bettina Berendt
10 Jul 2022

GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions
Scott McCarley, Mihaela A. Bornea, Sara Rosenthal, Anthony Ferritto, Md Arafat Sultan, Avirup Sil, Radu Florian
16 Jun 2022

Enhancing Continual Learning with Global Prototypes: Counteracting Negative Representation Drift
Xueying Bai, Jinghuan Shang, Yifan Sun, Niranjan Balasubramanian
24 May 2022

ATTEMPT: Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts
Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi
24 May 2022

When does Parameter-Efficient Transfer Learning Work for Machine Translation?
A. Ustun, Asa Cooper Stickland
23 May 2022

Lifting the Curse of Multilinguality by Pre-training Modular Transformers
Jonas Pfeiffer, Naman Goyal, Xi Victoria Lin, Xian Li, James Cross, Sebastian Riedel, Mikel Artetxe
12 May 2022

AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks
Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee
30 Apr 2022

DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, ..., Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
10 Apr 2022

IDPG: An Instance-Dependent Prompt Generation Method
Zhuofeng Wu, Sinong Wang, Jiatao Gu, Rui Hou, Yuxiao Dong, V. Vydiswaran, Hao Ma
09 Apr 2022

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Ves Stoyanov, Majid Yazdani
03 Apr 2022

Parameter-efficient Model Adaptation for Vision Transformers
Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, X. Wang
29 Mar 2022

Continual Sequence Generation with Adaptive Compositional Modules
Yanzhe Zhang, Xuezhi Wang, Diyi Yang
20 Mar 2022

Hyperdecoders: Instance-specific decoders for multi-task NLP
Hamish Ivison, Matthew E. Peters
15 Mar 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun
14 Mar 2022

Memory Efficient Continual Learning with Transformers
B. Ermiş, Giovanni Zappella, Martin Wistuba, Aditya Rawal, Cédric Archambeau
09 Mar 2022

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou
07 Mar 2022

$\mathcal{Y}$-Tuning: An Efficient Tuning Paradigm for Large-Scale Pre-Trained Models via Label Representation Learning
Yitao Liu, Chen An, Xipeng Qiu
20 Feb 2022

Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang
16 Feb 2022

Cascading Adaptors to Leverage English Data to Improve Performance of Question Answering for Low-Resource Languages
Hariom A. Pandya, Bhavik Ardeshna, Brijesh S. Bhatt
18 Dec 2021

Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning
Zixuan Ke, Bing-Quan Liu, Nianzu Ma, Hu Xu, Lei Shu
05 Dec 2021

Many Heads but One Brain: Fusion Brain -- a Competition and a Single Multimodal Multitask Architecture
Daria Bakshandaeva, Denis Dimitrov, V.Ya. Arkhipkin, Alex Shonenkov, M. Potanin, ..., Mikhail Martynov, Anton Voronov, Vera Davydova, E. Tutubalina, Aleksandr Petiushko
22 Nov 2021

Leveraging Sentiment Analysis Knowledge to Solve Emotion Detection Tasks
Maude Nguyen-The, Guillaume-Alexandre Bilodeau, Jan Rockemann
05 Nov 2021

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021

Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
08 Oct 2021

xGQA: Cross-Lingual Visual Question Answering
Jonas Pfeiffer, Gregor Geigle, Aishwarya Kamath, Jan-Martin O. Steitz, Stefan Roth, Ivan Vulić, Iryna Gurevych
13 Sep 2021

Total Recall: a Customized Continual Learning Method for Neural Semantic Parsers
Zhuang Li, Lizhen Qu, Gholamreza Haffari
11 Sep 2021

Efficient Test Time Adapter Ensembling for Low-resource Language Varieties
Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, Graham Neubig
10 Sep 2021

MetaXT: Meta Cross-Task Transfer between Disparate Label Spaces
Srinagesh Sharma, Guoqing Zheng, Ahmed Hassan Awadallah
09 Sep 2021

Sustainable Modular Debiasing of Language Models
Anne Lauscher, Tobias Lüken, Goran Glavas
08 Sep 2021

MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer
Ilias Chalkidis, Manos Fergadiotis, Ion Androutsopoulos
02 Sep 2021

Specializing Multilingual Language Models: An Empirical Study
Ethan C. Chau, Noah A. Smith
16 Jun 2021

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder
08 Jun 2021

MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, F. Khan, Shangling Jui, Joost van de Weijer
28 Apr 2021

Structural Adapters in Pretrained Language Models for AMR-to-text Generation
Leonardo F. R. Ribeiro, Yue Zhang, Iryna Gurevych
16 Mar 2021

Adapting MARBERT for Improved Arabic Dialect Identification: Submission to the NADI 2021 Shared Task
Badr AlKhamissi, Mohamed Gabr, Muhammad N. ElNokrashy, Khaled Essam
01 Mar 2021

Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing
Minh Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, Thien Huu Nguyen
09 Jan 2021

Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
01 Jan 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
01 Jan 2021

UNKs Everywhere: Adapting Multilingual Language Models to New Scripts
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder
31 Dec 2020

Parameter-Efficient Transfer Learning with Diff Pruning
Demi Guo, Alexander M. Rush, Yoon Kim
14 Dec 2020

Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual Transfer
M. Vidoni, Ivan Vulić, Goran Glavas
11 Dec 2020

AdapterDrop: On the Efficiency of Adapters in Transformers
Andreas Rucklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych
22 Oct 2020

MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale
Andreas Rucklé, Jonas Pfeiffer, Iryna Gurevych
02 Oct 2020

Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data
Jonathan Pilault, Amine Elhattami, C. Pal
19 Sep 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018