ResearchTrend.AI
LoRA: Low-Rank Adaptation of Large Language Models
17 June 2021
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
Communities: OffRL, AI4TS, AI4CE, ALM, AIMat
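Many of the citing papers below build on the core mechanism of LoRA: freezing a pretrained weight matrix W and learning only a low-rank update B·A, so the adapted weight is W + (α/r)·B·A. The following is a minimal NumPy sketch of that idea; the class and parameter names are illustrative, not the paper's reference implementation.

```python
import numpy as np

class LoRALinear:
    """Minimal sketch of a LoRA-adapted linear layer.

    The frozen pretrained weight W is augmented with a low-rank update
    B @ A, scaled by alpha / r. Only A and B are trained; W stays fixed.
    B is initialized to zero so the adapted layer starts out identical
    to the frozen base model, as in the paper.
    """

    def __init__(self, d_out, d_in, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
        self.A = rng.normal(size=(r, d_in)) * 0.01   # trainable, random init
        self.B = np.zeros((d_out, r))                # trainable, zero init
        self.scale = alpha / r

    def forward(self, x):
        # Base path plus the low-rank adaptation path.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

layer = LoRALinear(d_out=6, d_in=4)
x = np.ones(4)
# With B initialized to zero, the adapted layer matches the frozen base.
assert np.allclose(layer.forward(x), layer.W @ x)
```

Because only A and B are updated, the number of trainable parameters is r·(d_in + d_out) instead of d_in·d_out, which is the parameter-efficiency argument most of the works listed below engage with.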

Papers citing "LoRA: Low-Rank Adaptation of Large Language Models"

27 / 1,627 papers shown
A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis
Bing Wang, Liang Ding, Qihuang Zhong, Ximing Li, Dacheng Tao
16 Apr 2022

Impossible Triangle: What's Next for Pre-trained Language Models?
Chenguang Zhu, Michael Zeng
13 Apr 2022

DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning
Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, ..., Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
Communities: CLL, VLM, VPVLM
10 Apr 2022

Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval
Robert Litschko, Ivan Vulić, Goran Glavaš
Communities: LRM
05 Apr 2022

Parameter-efficient Model Adaptation for Vision Transformers
Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, X. Wang
29 Mar 2022

Hyperdecoders: Instance-specific decoders for multi-task NLP
Hamish Ivison, Matthew E. Peters
Communities: AI4CE
15 Mar 2022

Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
Ning Ding, Yujia Qin, Guang Yang, Fu Wei, Zonghan Yang, ..., Jianfei Chen, Yang Liu, Jie Tang, Juan Li, Maosong Sun
14 Mar 2022

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou
Communities: VLM, AAML
07 Mar 2022

Revisiting Parameter-Efficient Tuning: Are We Really There Yet?
Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, Shangsong Liang
16 Feb 2022

EdgeFormer: A Parameter-Efficient Transformer for On-Device Seq2seq Generation
Tao Ge, Si-Qing Chen, Furu Wei
Communities: MoE
16 Feb 2022

CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks
Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Jianwei Yang, Xiyang Dai, Bin Xiao, Haoxuan You, Shih-Fu Chang, Lu Yuan
Communities: CLIP, VLM
15 Jan 2022

Black-Box Tuning for Language-Model-as-a-Service
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu
Communities: VLM
10 Jan 2022

Efficient Hierarchical Domain Adaptation for Pretrained Language Models
Alexandra Chronopoulou, Matthew E. Peters, Jesse Dodge
16 Dec 2021

Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time
Zhao-quan Song, Licheng Zhang, Ruizhe Zhang
14 Dec 2021

VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks
Yi-Lin Sung, Jaemin Cho, Mohit Bansal
Communities: VLM, VPVLM
13 Dec 2021

Pruning Pretrained Encoders with a Multitask Objective
Patrick Xia, Richard Shin
10 Dec 2021

Improving Differentially Private SGD via Randomly Sparsified Gradients
Junyi Zhu, Matthew B. Blaschko
01 Dec 2021

OpenPrompt: An Open-source Framework for Prompt-learning
Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Haitao Zheng, Maosong Sun
Communities: VLM, LLMAG
03 Nov 2021

Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning
Euna Jung, Jaekeol Choi, Wonjong Rhee
28 Oct 2021

Differentially Private Fine-tuning of Language Models
Da Yu, Saurabh Naik, A. Backurs, Sivakanth Gopi, Huseyin A. Inan, ..., Y. Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang
13 Oct 2021

Towards a Unified View of Parameter-Efficient Transfer Learning
Junxian He, Chunting Zhou, Xuezhe Ma, Taylor Berg-Kirkpatrick, Graham Neubig
Communities: AAML
08 Oct 2021

Towards Continual Knowledge Learning of Language Models
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
Communities: CLL, KELM
07 Oct 2021

Initialization and Regularization of Factorized Neural Layers
M. Khodak, Neil A. Tenenholtz, Lester W. Mackey, Nicolò Fusi
03 May 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Communities: VPVLM
18 Apr 2021

WARP: Word-level Adversarial ReProgramming
Karen Hambardzumyan, Hrant Khachatrian, Jonathan May
Communities: AAML
01 Jan 2021

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
Communities: MoE
17 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Communities: ELM
20 Apr 2018