LoRA: Low-Rank Adaptation of Large Language Models
International Conference on Learning Representations (ICLR), 2021
arXiv: 2106.09685 (v2, latest) · 17 June 2021
J. E. Hu
Yelong Shen
Phillip Wallis
Zeyuan Allen-Zhu
Yuanzhi Li
Shean Wang
Lu Wang
Weizhu Chen
OffRL, AI4TS, AI4CE, ALM, AIMat
ArXiv (abs) · PDF · HTML · HuggingFace (49 upvotes) · GitHub (11,998★)
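For context on the method the papers below build on: LoRA freezes the pretrained weight W0 and learns a rank-r correction B·A, so an adapted linear layer computes h = W0 x + (alpha/r)·B A x. The following is a minimal, hedged sketch of that idea, not the authors' reference implementation; the class and parameter names (LoRALinear, rank, alpha) are illustrative, and PyTorch is assumed.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update (illustrative sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # pretrained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no change at step 0
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = (x @ self.lora_A.T) @ self.lora_B.T      # x A^T B^T, the low-rank correction
        return self.base(x) + self.scaling * delta       # x W0^T + b + (alpha/r) * correction

# Usage: wrap an existing projection, then optimize only the LoRA parameters.
layer = LoRALinear(nn.Linear(768, 768), rank=8, alpha=16)
out = layer(torch.randn(4, 768))                         # shape: (4, 768)

Because only lora_A and lora_B require gradients, the number of trainable parameters per wrapped layer drops from in_features × out_features to rank × (in_features + out_features).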

Papers citing "LoRA: Low-Rank Adaptation of Large Language Models"

50 / 8,614 papers shown
On the Effectiveness of Parameter-Efficient Fine-Tuning
AAAI Conference on Artificial Intelligence (AAAI), 2022
Z. Fu
Haoran Yang
Anthony Man-Cho So
Wai Lam
Lidong Bing
Nigel Collier
217
206
0
28 Nov 2022
A Comprehensive Survey on Enterprise Financial Risk Analysis from Big Data Perspective
Yu Zhao
Huaming Du
Qing Li
Fuzhen Zhuang
Ji Liu
Gang Kou
553
4
0
28 Nov 2022
Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2022
Peter Henderson
E. Mitchell
Christopher D. Manning
Dan Jurafsky
Chelsea Finn
241
62
0
27 Nov 2022
MAEDAY: MAE for few and zero shot AnomalY-Detection
Computer Vision and Image Understanding (CVIU), 2022
Eli Schwartz
Assaf Arbelle
Leonid Karlinsky
Sivan Harary
Florian Scheidegger
Sivan Doveh
Raja Giryes
ViT, UQ, CV
217
58
0
25 Nov 2022
HyperTuning: Toward Adapting Large Language Models without Back-propagation
International Conference on Machine Learning (ICML), 2022
Jason Phang
Yi Mao
Pengcheng He
Weizhu Chen
231
39
0
22 Nov 2022
Linear Interpolation In Parameter Space is Good Enough for Fine-Tuned Language Models
Mark Rofin
Nikita Balagansky
Daniil Gavrilov
MoMe, KELM
163
7
0
22 Nov 2022
Teaching Structured Vision&Language Concepts to Vision&Language Models
Computer Vision and Pattern Recognition (CVPR), 2022
Sivan Doveh
Assaf Arbelle
Sivan Harary
Yikang Shen
Roei Herzig
...
Donghyun Kim
Raja Giryes
Rogerio Feris
S. Ullman
Leonid Karlinsky
VLM, CoGe
342
91
0
21 Nov 2022
Multitask Vision-Language Prompt Tuning
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Sheng Shen
Shijia Yang
Tianjun Zhang
Bohan Zhai
Joseph E. Gonzalez
Kurt Keutzer
Trevor Darrell
VLMVPVLM
288
77
0
21 Nov 2022
AF Adapter: Continual Pretraining for Building Chinese Biomedical Language Model
IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2022
Yongyu Yan
Kui Xue
Xiaoming Shi
Qi Ye
Jingping Liu
Tong Ruan
CLL
178
4
0
21 Nov 2022
Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors
Neural Information Processing Systems (NeurIPS), 2022
Thomas Hartvigsen
S. Sankaranarayanan
Hamid Palangi
Yoon Kim
Marzyeh Ghassemi
KELM
642
237
0
20 Nov 2022
ConStruct-VL: Data-Free Continual Structured VL Concepts Learning
Computer Vision and Pattern Recognition (CVPR), 2022
James Smith
Paola Cascante-Bonilla
Assaf Arbelle
Donghyun Kim
Yikang Shen
David D. Cox
Diyi Yang
Z. Kira
Rogerio Feris
Leonid Karlinsky
VLM
289
25
0
17 Nov 2022
Structured Pruning Adapters
Pattern Recognition (Pattern Recogn.), 2022
Lukas Hedegaard
Aman Alok
Juby Jose
Alexandros Iosifidis
281
15
0
17 Nov 2022
CSCD-NS: a Chinese Spelling Check Dataset for Native Speakers
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Yong Hu
Fandong Meng
Jie Zhou
269
17
0
16 Nov 2022
FedTune: A Deep Dive into Efficient Federated Fine-Tuning with Pre-trained Transformers
Jinyu Chen
Wenchao Xu
Song Guo
Junxiao Wang
Jie Zhang
Yining Qi
FedML
192
45
0
15 Nov 2022
Controllable Citation Sentence Generation with Language Models
Nianlong Gu
Richard H. R. Hahnloser
152
2
0
14 Nov 2022
Large Language Models Meet Harry Potter: A Bilingual Dataset for Aligning Dialogue Agents with Characters
Polydoros Giannouris
Yan Wang
Haiyun Jiang
Deng Cai
Yuhan Li
Ziyang Chen
Longyue Wang
Jia Li
328
12
0
13 Nov 2022
FPT: Improving Prompt Tuning Efficiency via Progressive Training
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Yufei Huang
Yujia Qin
Huadong Wang
Yichun Yin
Maosong Sun
Zhiyuan Liu
Qun Liu
VLM, LRM
145
6
0
13 Nov 2022
Large-Scale Bidirectional Training for Zero-Shot Image Captioning
Taehoon Kim
Mark A Marsden
Pyunghwan Ahn
Sangyun Kim
Sihaeng Lee
Alessandra Sala
S. Kim
VLM
220
5
0
13 Nov 2022
One-Time Model Adaptation to Heterogeneous Clients: An Intra-Client and Inter-Image Attention Design
Yikai Yan
Chaoyue Niu
Fan Wu
Qinya Li
Shaojie Tang
Chengfei Lyu
Guihai Chen
164
0
0
11 Nov 2022
Multi-Head Adapter Routing for Cross-Task Generalization
Neural Information Processing Systems (NeurIPS), 2022
Lucas Caccia
Edoardo Ponti
Zhan Su
Matheus Pereira
Nicolas Le Roux
Alessandro Sordoni
142
34
0
07 Nov 2022
Motion Style Transfer: Modular Low-Rank Adaptation for Deep Motion Forecasting
Conference on Robot Learning (CoRL), 2022
Parth Kothari
Danyang Li
Yuejiang Liu
Alexandre Alahi
TTA, AI4TS
262
18
0
06 Nov 2022
On the Domain Adaptation and Generalization of Pretrained Language Models: A Survey
Xu Guo
Han Yu
LM&MA, VLM
308
34
0
06 Nov 2022
Resource-Efficient Transfer Learning From Speech Foundation Model Using Hierarchical Feature Fusion
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022
Zhouyuan Huo
K. Sim
Yue Liu
DongSeon Hwang
Tara N. Sainath
Trevor Strohman
144
8
0
04 Nov 2022
Could Giant Pretrained Image Models Extract Universal Representations?
Neural Information Processing Systems (NeurIPS), 2022
Yutong Lin
Ze Liu
Zheng Zhang
Han Hu
Nanning Zheng
Stephen Lin
Yue Cao
VLM
180
10
0
03 Nov 2022
Two-stage LLM Fine-tuning with Less Specialization and More Generalization
International Conference on Learning Representations (ICLR), 2022
Yihan Wang
Si Si
Daliang Li
Michal Lukasik
Felix X. Yu
Cho-Jui Hsieh
Inderjit S Dhillon
Sanjiv Kumar
336
42
0
01 Nov 2022
Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers
Interspeech (Interspeech), 2022
Cheng-Ping Hsieh
Subhankar Ghosh
Boris Ginsburg
233
22
0
01 Nov 2022
AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Yaqing Wang
Sahaj Agarwal
Subhabrata Mukherjee
Xiaodong Liu
Jing Gao
Ahmed Hassan Awadallah
Jianfeng Gao
MoE
308
170
0
31 Oct 2022
GPS: Genetic Prompt Search for Efficient Few-shot Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Hanwei Xu
Yujun Chen
Yulun Du
Nan Shao
Yanggang Wang
Haiyu Li
Zhilin Yang
VLM
147
47
0
31 Oct 2022
Parameter-Efficient Tuning Makes a Good Classification Head
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zhuoyi Yang
Ming Ding
Yanhui Guo
Qingsong Lv
Jie Tang
VLM
265
16
0
30 Oct 2022
Differentiable Data Augmentation for Contrastive Sentence Representation Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Tianduo Wang
Wei Lu
SSL
135
11
0
29 Oct 2022
Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Yifan Chen
Devamanyu Hazarika
Mahdi Namazifar
Yang Liu
Di Jin
Dilek Z. Hakkani-Tür
159
4
0
26 Oct 2022
Learning Better Intent Representations for Financial Open Intent Classification
Xianzhi Li
Will Aitken
Xiao-Dan Zhu
Stephen W. Thomas
AIFin
145
8
0
25 Oct 2022
Evaluating Parameter Efficient Learning for Generation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Peng Xu
M. Patwary
Shrimai Prabhumoye
Virginia Adams
R. Prenger
Ming-Yu Liu
Nayeon Lee
Mohammad Shoeybi
Bryan Catanzaro
MoE
166
3
0
25 Oct 2022
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning
Jing Yi
Weize Chen
Yujia Qin
Yankai Lin
Ning Ding
Xu Han
Zhiyuan Liu
Maosong Sun
Jie Zhou
274
2
0
24 Oct 2022
NVIDIA FLARE: Federated Learning from Simulation to Real-World
IEEE Data Engineering Bulletin (DEB), 2022
H. Roth
Yan Cheng
Yuhong Wen
Isaac Yang
Ziyue Xu
...
Daguang Xu
Nic Ma
Prerna Dogra
Mona G. Flores
Andrew Feng
FedML, AI4CE
306
139
0
24 Oct 2022
Exploring The Landscape of Distributional Robustness for Question Answering Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Anas Awadalla
Mitchell Wortsman
Gabriel Ilharco
Sewon Min
Ian H. Magnusson
Hannaneh Hajishirzi
Ludwig Schmidt
ELM, OOD, KELM
228
23
0
22 Oct 2022
Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Yekun Chai
Shuohuan Wang
Yu Sun
Hao Tian
Hua Wu
Haifeng Wang
VLM
264
19
0
21 Oct 2022
Efficiently Tuned Parameters are Task Embeddings
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Wangchunshu Zhou
Canwen Xu
Julian McAuley
155
8
0
21 Oct 2022
Tele-Knowledge Pre-training for Fault Analysis
IEEE International Conference on Data Engineering (ICDE), 2022
Zhuo Chen
Wen Zhang
Yufen Huang
Yin Hua
Yuxia Geng
...
Song Jiang
Zhaoyang Lian
Yuchen Ren
Lei Cheng
Hua-zeng Chen
290
22
0
20 Oct 2022
Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Xiangyang Liu
Tianxiang Sun
Xuanjing Huang
Xipeng Qiu
VLM
228
30
0
20 Oct 2022
Continued Pretraining for Better Zero- and Few-Shot Promptability
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Zhaofeng Wu
Robert L. Logan IV
Pete Walsh
Akshita Bhagia
Dirk Groeneveld
Sameer Singh
Iz Beltagy
VLM
230
15
0
19 Oct 2022
Hidden State Variability of Pretrained Language Models Can Guide Computation Reduction for Transfer Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Shuo Xie
Jiahao Qiu
Ankita Pasad
Li Du
Qing Qu
Hongyuan Mei
235
16
0
18 Oct 2022
Tiny-Attention Adapter: Contexts Are More Important Than the Number of Parameters
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Hongyu Zhao
Hao Tan
Hongyuan Mei
MoE
191
19
0
18 Oct 2022
Domain Specific Sub-network for Multi-Domain Neural Machine Translation
Amr Hendy
M. Abdelghaffar
Mohamed Afify
Ahmed Tawfik
AI4CE
149
0
0
18 Oct 2022
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
Neural Information Processing Systems (NeurIPS), 2022
Dongze Lian
Daquan Zhou
Jiashi Feng
Xinchao Wang
354
335
0
17 Oct 2022
Keep Me Updated! Memory Management in Long-term Conversations
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Sanghwan Bae
Donghyun Kwak
Soyoung Kang
Min Young Lee
Sungdong Kim
Yuin Jeong
Hyeri Kim
Sang-Woo Lee
W. Park
Nako Sung
306
62
0
17 Oct 2022
Accelerating Transfer Learning with Near-Data Computation on Cloud Object Stores
ACM Symposium on Cloud Computing (SoCC), 2022
Arsany Guirguis
Diana Petrescu
Florin Dinu
D. Quoc
Javier Picorel
R. Guerraoui
224
0
0
16 Oct 2022
Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Ying Zhao
Yinhe Zheng
Zhiliang Tian
Chang Gao
Yu Bowen
Haiyang Yu
Yongbin Li
Jianguo Sun
Ningyu Zhang
CLL, OffRL
218
14
0
14 Oct 2022
Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning
Annual Meeting of the Association for Computational Linguistics (ACL), 2022
Tianxiang Sun
Zhengfu He
Qinen Zhu
Xipeng Qiu
Xuanjing Huang
VLMVPVLM
201
24
0
14 Oct 2022
DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation
Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2022
Mojtaba Valipour
Mehdi Rezagholizadeh
I. Kobyzev
A. Ghodsi
417
240
0
14 Oct 2022