LoRA+: Efficient Low Rank Adaptation of Large Models

19 February 2024
Soufiane Hayou, Nikhil Ghosh, Bin Yu
AI4CE
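For context on the paper whose citations are listed below: LoRA+'s central observation is that the two LoRA adapter matrices A and B should not share a single learning rate; training B with a larger learning rate than A yields more efficient feature learning. Here is a minimal PyTorch sketch of that idea, assuming a standard LoRA linear layer; the class name, initialization, and ratio value are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W + B @ A."""
    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        # Pretrained weight stays frozen during fine-tuning.
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Standard LoRA init: A random, B zero, so the update starts at zero.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) / rank ** 0.5)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight + self.lora_B @ self.lora_A).T

model = LoRALinear(512, 512)

# LoRA+: give B a larger learning rate than A (lr_B = ratio * lr_A).
# The ratio of 16 is an illustrative default, not a universal setting.
base_lr, ratio = 1e-4, 16
optimizer = torch.optim.AdamW([
    {"params": [model.lora_A], "lr": base_lr},
    {"params": [model.lora_B], "lr": base_lr * ratio},
])
```

In ordinary LoRA both matrices sit in one optimizer group; the only change LoRA+ requires is splitting them into two groups with different learning rates, as above.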

Papers citing "LoRA+: Efficient Low Rank Adaptation of Large Models"

32 / 32 papers shown
ABS-Mamba: SAM2-Driven Bidirectional Spiral Mamba Network for Medical Image Translation
Feng Yuan, Yifan Gao, Wenbin Wu, Keqing Wu, Xiaotong Guo, Jie Jiang, Xin Gao
Mamba · 12 May 2025

HSplitLoRA: A Heterogeneous Split Parameter-Efficient Fine-Tuning Framework for Large Language Models
Zheng Lin, Yuxin Zhang, Zhe Chen, Zihan Fang, Xianhao Chen, Praneeth Vepakomma, Wei Ni, Jun-Jie Luo, Yue Gao
MoE · 05 May 2025

SEFE: Superficial and Essential Forgetting Eliminator for Multimodal Continual Instruction Tuning
Jinpeng Chen, Runmin Cong, Yuzhi Zhao, Hongzheng Yang, Guangneng Hu, H. Ip, Sam Kwong
CLL, KELM · 05 May 2025

Memory-Efficient LLM Training by Various-Grained Low-Rank Projection of Gradients
Yezhen Wang, Zhouhao Yang, Brian K Chen, Fanyi Pu, Bo-wen Li, Tianyu Gao, Kenji Kawaguchi
03 May 2025

DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models
Wenjin Ke, Zhe Li, D. Li, Lu Tian, E. Barsoum
MQ · 12 Apr 2025

Towards Symmetric Low-Rank Adapters
Tales Panoutsos, Rodrygo L. T. Santos, Flavio Figueiredo
29 Mar 2025

K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs
Ziheng Ouyang, Zhen Li, Qibin Hou
MoMe, OffRL · 25 Feb 2025

Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment
Chenghao Fan, Zhenyi Lu, Sichen Liu, Xiaoye Qu, Wei Wei, Chengfeng Gu, Yu-Xi Cheng
MoE · 24 Feb 2025

Low Tensor-Rank Adaptation of Kolmogorov-Arnold Networks
Yihang Gao, Michael K. Ng, Vincent Y. F. Tan
17 Feb 2025

GoRA: Gradient-driven Adaptive Low Rank Adaptation
Haonan He, Peng Ye, Yuchen Ren, Yuan Yuan, Lei Chen
AI4TS, AI4CE · 13 Feb 2025

Rank Also Matters: Hierarchical Configuration for Mixture of Adapter Experts in LLM Fine-Tuning
Peizhuang Cong, Wenpu Liu, Wenhan Yu, Haochen Zhao, Tong Yang
ALM, MoE · 06 Feb 2025

Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
Shuangyi Chen, Yuanxin Guo, Yue Ju, Harik Dalal, Ashish Khisti
03 Feb 2025

Sparse High Rank Adapters
K. Bhardwaj, N. Pandey, Sweta Priyadarshi, Viswanath Ganapathy, Rafael Esteves, ..., P. Whatmough, Risheek Garrepalli, M. V. Baalen, Harris Teague, Markus Nagel
MQ · 28 Jan 2025

Aggregating Low Rank Adapters in Federated Fine-tuning
Evelyn Trautmann, Ian Hales, Martin F. Volk
AI4CE, FedML · 10 Jan 2025

Neural Precision Polarization: Simplifying Neural Network Inference with Dual-Level Precision
Dinithi Jayasuriya, Nastaran Darabi, Maeesha Binte Hashem, A. R. Trivedi
MQ · 06 Nov 2024

MambaPEFT: Exploring Parameter-Efficient Fine-Tuning for Mamba
Masakazu Yoshimura, Teruaki Hayashi, Yota Maeda
Mamba · 06 Nov 2024

Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024

MTL-LoRA: Low-Rank Adaptation for Multi-Task Learning
Yaming Yang, Dilxat Muhtar, Yelong Shen, Yuefeng Zhan, Jianfeng Liu, ..., Denvy Deng, Feng Sun, Qi Zhang, Weizhu Chen, Yunhai Tong
MoE, MoMe · 12 Oct 2024

Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim, Wonjun Kang, Yuchen Zeng, H. Koo, Kangwook Lee
11 Oct 2024

LoRTA: Low Rank Tensor Adaptation of Large Language Models
Ignacio Hounie, Charilaos I. Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro
05 Oct 2024

Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization
Xinhao Yao, Hongjin Qian, Xiaolin Hu, Gengze Xu, Wei Liu, Jian Luan, B. Wang, Y. Liu
03 Oct 2024

Selective Aggregation for Low-Rank Adaptation in Federated Learning
Pengxin Guo, Shuang Zeng, Y. Wang, Huijie Fan, Feifei Wang, Liangqiong Qu
FedML · 02 Oct 2024

Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy
Bojian Li, Bo Liu, Jinghua Yue, F. Zhou, Fugen Zhou
MedIm, MDE · 12 Sep 2024

LoRA-Pro: Are Low-Rank Adapters Properly Optimized?
Zhengbo Wang, Jian Liang, Ran He, Zilei Wang, Tieniu Tan
25 Jul 2024

On Giant's Shoulders: Effortless Weak to Strong by Dynamic Logits Fusion
Chenghao Fan, Zhenyi Lu, Wei Wei, Jie Tian, Xiaoye Qu, Dangyang Chen, Yu Cheng
MoMe · 17 Jun 2024

MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning
Hanqing Wang, Zeguan Xiao, Shuo Wang, Guanhua Chen
13 Jun 2024

The Impact of Initialization on LoRA Finetuning Dynamics
Soufiane Hayou, Nikhil Ghosh, Bin Yu
AI4CE · 12 Jun 2024

Low-Rank Adaption on Transformer-based Oriented Object Detector for Satellite Onboard Processing of Remote Sensing Images
Xinyang Pu, Feng Xu
04 Jun 2024

SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
Xiangyu Chen, Jing Liu, Ye Wang, Pu Wang, Matthew Brand, Guanghui Wang, T. Koike-Akino
18 Mar 2024

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021

Stable ResNet
Soufiane Hayou, Eugenio Clerico, Bo He, George Deligiannidis, Arnaud Doucet, Judith Rousseau
ODL, SSeg · 24 Oct 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018