ResearchTrend.AI

Parameter-Efficient Transfer Learning for NLP
arXiv:1902.00751 · 2 February 2019
N. Houlsby
A. Giurgiu
Stanislaw Jastrzebski
Bruna Morrone
Quentin de Laroussilhe
Andrea Gesmundo
Mona Attariyan
Sylvain Gelly
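This paper inserts small bottleneck modules ("adapters") into each transformer layer and trains only those, keeping the pretrained weights frozen. A minimal NumPy sketch of one adapter block (dimensions and initialization scale are illustrative; ReLU stands in for the paper's nonlinearity):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 2  # model width d, bottleneck size m << d

# Adapter parameters: down-projection, nonlinearity, up-projection.
W_down = rng.normal(0.0, 0.01, (d, m))
b_down = np.zeros(m)
W_up = np.zeros((m, d))  # near-zero init: the adapter starts as the identity
b_up = np.zeros(d)

def adapter(h):
    """Bottleneck adapter with a residual connection (Houlsby et al., 2019)."""
    z = np.maximum(h @ W_down + b_down, 0.0)  # down-project, then ReLU
    return h + z @ W_up + b_up                # up-project and add residual

h = rng.normal(size=(4, d))  # a batch of hidden states
out = adapter(h)
print(np.allclose(out, h))   # True: identity map at initialization
```

Each adapter adds only 2dm + d + m parameters per insertion point, versus the O(d^2) weights of the frozen layers around it, which is where the parameter efficiency comes from.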

Papers citing "Parameter-Efficient Transfer Learning for NLP"

Showing 50 of 727 citing papers, newest first.
Towards a Unified View of Parameter-Efficient Transfer Learning
  Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig · AAML · 21 · 892 · 0 · 08 Oct 2021
Knowledge-Enhanced Evidence Retrieval for Counterargument Generation
  Yohan Jo, Haneul Yoo, Jinyeong Bak, Alice H. Oh, Chris Reed, Eduard H. Hovy · RALM · 38 · 12 · 0 · 19 Sep 2021
A Conditional Generative Matching Model for Multi-lingual Reply Suggestion
  Budhaditya Deb, Guoqing Zheng, Milad Shokouhi, Ahmed Hassan Awadallah · 26 · 1 · 0 · 15 Sep 2021
Residual Adapters for Parameter-Efficient ASR Adaptation to Atypical and Accented Speech
  Katrin Tomanek, Vicky Zayats, Dirk Padfield, K. Vaillancourt, Fadi Biadsy · 51 · 57 · 0 · 14 Sep 2021
Automatically Exposing Problems with Neural Dialog Models
  Dian Yu, Kenji Sagae · 23 · 9 · 0 · 14 Sep 2021
Non-Parametric Unsupervised Domain Adaptation for Neural Machine Translation
  Xin Zheng, Zhirui Zhang, Shujian Huang, Boxing Chen, Jun Xie, Weihua Luo, Jiajun Chen · 68 · 25 · 0 · 14 Sep 2021
xGQA: Cross-Lingual Visual Question Answering
  Jonas Pfeiffer, Gregor Geigle, Aishwarya Kamath, Jan-Martin O. Steitz, Stefan Roth, Ivan Vulić, Iryna Gurevych · 26 · 56 · 0 · 13 Sep 2021
Total Recall: a Customized Continual Learning Method for Neural Semantic Parsers
  Zhuang Li, Lizhen Qu, Gholamreza Haffari · CLL · 32 · 15 · 0 · 11 Sep 2021
Efficient Test Time Adapter Ensembling for Low-resource Language Varieties
  Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, Graham Neubig · 30 · 34 · 0 · 10 Sep 2021
Fusing task-oriented and open-domain dialogues in conversational agents
  Tom Young, Frank Xing, Vlad Pandelea, Jinjie Ni, Erik Cambria · 24 · 60 · 0 · 09 Sep 2021
Sustainable Modular Debiasing of Language Models
  Anne Lauscher, Tobias Lüken, Goran Glavaš · 47 · 120 · 0 · 08 Sep 2021
Discrete and Soft Prompting for Multilingual Models
  Mengjie Zhao, Hinrich Schütze · LRM · 10 · 71 · 0 · 08 Sep 2021
MultiEURLEX -- A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer
  Ilias Chalkidis, Manos Fergadiotis, Ion Androutsopoulos · AILaw · 16 · 106 · 0 · 02 Sep 2021
LightNER: A Lightweight Tuning Paradigm for Low-resource NER via Pluggable Prompting
  Xiang Chen, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, Ningyu Zhang · VLM · 34 · 65 · 0 · 31 Aug 2021
Task-Oriented Dialogue System as Natural Language Generation
  Weizhi Wang, Zhirui Zhang, Junliang Guo, Yinpei Dai, Boxing Chen, Weihua Luo · 28 · 32 · 0 · 31 Aug 2021
How Does Adversarial Fine-Tuning Benefit BERT?
  J. Ebrahimi, Hao Yang, Wei Zhang · AAML · 18 · 4 · 0 · 31 Aug 2021
Design and Scaffolded Training of an Efficient DNN Operator for Computer Vision on the Edge
  Vinod Ganesan, Pratyush Kumar · 34 · 2 · 0 · 25 Aug 2021
Cross-lingual Transferring of Pre-trained Contextualized Language Models
  Zuchao Li, Kevin Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita · 13 · 8 · 0 · 27 Jul 2021
ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback
  Mike Wu, Noah D. Goodman, Chris Piech, Chelsea Finn · 16 · 19 · 0 · 23 Jul 2021
A Primer on Pretrained Multilingual Language Models
  Sumanth Doddapaneni, Gowtham Ramesh, Mitesh M. Khapra, Anoop Kunchukuttan, Pratyush Kumar · LRM · 43 · 73 · 0 · 01 Jul 2021
Scientia Potentia Est -- On the Role of Knowledge in Computational Argumentation
  Anne Lauscher, Henning Wachsmuth, Iryna Gurevych, Goran Glavaš · 25 · 31 · 0 · 01 Jul 2021
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
  Robert L. Logan IV, Ivana Balažević, Eric Wallace, Fabio Petroni, Sameer Singh, Sebastian Riedel · VPVLM · 31 · 207 · 0 · 24 Jun 2021
Do Language Models Perform Generalizable Commonsense Inference?
  Peifeng Wang, Filip Ilievski, Muhao Chen, Xiang Ren · ReLM, LRM · 15 · 19 · 0 · 22 Jun 2021
BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
  Elad Ben-Zaken, Shauli Ravfogel, Yoav Goldberg · 15 · 1,148 · 0 · 18 Jun 2021
Adversarial Training Helps Transfer Learning via Better Representations
  Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, James Y. Zou · GAN · 36 · 52 · 0 · 18 Jun 2021
Specializing Multilingual Language Models: An Empirical Study
  Ethan C. Chau, Noah A. Smith · 25 · 27 · 0 · 16 Jun 2021
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
  Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder · MoE · 33 · 467 · 0 · 08 Jun 2021
Signal Transformer: Complex-valued Attention and Meta-Learning for Signal Recognition
  Yihong Dong, Ying Peng, Muqiao Yang, Songtao Lu, Qingjiang Shi · 38 · 9 · 0 · 05 Jun 2021
MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains
  Yaxing Wang, Abel Gonzalez-Garcia, Chenshen Wu, Luis Herranz, F. Khan, Shangling Jui, Joost van de Weijer · 19 · 6 · 0 · 28 Apr 2021
Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Translation
  Mozhdeh Gheini, Xiang Ren, Jonathan May · LRM · 20 · 105 · 0 · 18 Apr 2021
Relational World Knowledge Representation in Contextual Language Models: A Review
  Tara Safavi, Danai Koutra · KELM · 30 · 51 · 0 · 12 Apr 2021
Attribute Alignment: Controlling Text Generation from Pre-trained Language Models
  Dian Yu, Zhou Yu, Kenji Sagae · 13 · 37 · 0 · 20 Mar 2021
Structural Adapters in Pretrained Language Models for AMR-to-text Generation
  Leonardo F. R. Ribeiro, Yue Zhang, Iryna Gurevych · 33 · 69 · 0 · 16 Mar 2021
Pretrained Transformers as Universal Computation Engines
  Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch · 26 · 217 · 0 · 09 Mar 2021
NADI 2021: The Second Nuanced Arabic Dialect Identification Shared Task
  Muhammad Abdul-Mageed, Chiyu Zhang, AbdelRahim Elmadany, Houda Bouamor, Nizar Habash · 13 · 75 · 0 · 04 Mar 2021
Adapting MARBERT for Improved Arabic Dialect Identification: Submission to the NADI 2021 Shared Task
  Badr AlKhamissi, Mohamed Gabr, Muhammad N. ElNokrashy, Khaled Essam · 18 · 17 · 0 · 01 Mar 2021
Self-Tuning for Data-Efficient Deep Learning
  Ximei Wang, Jing Gao, Mingsheng Long, Jianmin Wang · BDL · 22 · 69 · 0 · 25 Feb 2021
Meta-Transfer Learning for Low-Resource Abstractive Summarization
  Yi-Syuan Chen, Hong-Han Shuai · CLL, OffRL · 40 · 38 · 0 · 18 Feb 2021
Combining pre-trained language models and structured knowledge
  Pedro Colon-Hernandez, Catherine Havasi, Jason B. Alonso, Matthew Huggins, C. Breazeal · KELM · 22 · 48 · 0 · 28 Jan 2021
Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing
  Minh Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, Thien Huu Nguyen · 44 · 132 · 0 · 09 Jan 2021
Learning to Generate Task-Specific Adapters from Task Description
  Qinyuan Ye, Xiang Ren · 107 · 29 · 0 · 02 Jan 2021
Prefix-Tuning: Optimizing Continuous Prompts for Generation
  Xiang Lisa Li, Percy Liang · 20 · 4,073 · 0 · 01 Jan 2021
WARP: Word-level Adversarial ReProgramming
  Karen Hambardzumyan, Hrant Khachatrian, Jonathan May · AAML · 254 · 342 · 0 · 01 Jan 2021
UNKs Everywhere: Adapting Multilingual Language Models to New Scripts
  Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, Sebastian Ruder · 22 · 126 · 0 · 31 Dec 2020
Parameter-Efficient Transfer Learning with Diff Pruning
  Demi Guo, Alexander M. Rush, Yoon Kim · 9 · 383 · 0 · 14 Dec 2020
Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual Transfer
  M. Vidoni, Ivan Vulić, Goran Glavaš · 31 · 27 · 0 · 11 Dec 2020
Efficient Estimation of Influence of a Training Instance
  Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui · TDI · 27 · 15 · 0 · 08 Dec 2020
Modifying Memories in Transformer Models
  Chen Zhu, A. S. Rawat, Manzil Zaheer, Srinadh Bhojanapalli, Daliang Li, Felix X. Yu, Sanjiv Kumar · KELM · 11 · 190 · 0 · 01 Dec 2020
Emergent Communication Pretraining for Few-Shot Machine Translation
  Yaoyiran Li, E. Ponti, Ivan Vulić, Anna Korhonen · 23 · 19 · 0 · 02 Nov 2020
Target Word Masking for Location Metonymy Resolution
  Haonan Li, Maria Vasardani, Martin Tomko, Timothy Baldwin · 6 · 11 · 0 · 30 Oct 2020