Asymmetry in Low-Rank Adapters of Foundation Models
26 February 2024
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel-Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon

Papers citing "Asymmetry in Low-Rank Adapters of Foundation Models"
9 papers shown

DL-QAT: Weight-Decomposed Low-Rank Quantization-Aware Training for Large Language Models
Wenjin Ke, Zhe Li, D. Li, Lu Tian, E. Barsoum
12 Apr 2025

Parameter Efficient Merging for Multimodal Large Language Models with Complementary Parameter Adaptation
Fanhu Zeng, Haiyang Guo, Fei Zhu, Li Shen, Hao Tang
24 Feb 2025

Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA
Shuangyi Chen, Yuanxin Guo, Yue Ju, Harik Dalal, Ashish Khisti
03 Feb 2025

Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization
Xinhao Yao, Hongjin Qian, Xiaolin Hu, Gengze Xu, Wei Liu, Jian Luan, B. Wang, Y. Liu
03 Oct 2024

Selective Aggregation for Low-Rank Adaptation in Federated Learning
Pengxin Guo, Shuang Zeng, Y. Wang, Huijie Fan, Feifei Wang, Liangqiong Qu
02 Oct 2024

Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead
Rickard Brüel-Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon
17 Jun 2024

The Impact of Initialization on LoRA Finetuning Dynamics
Soufiane Hayou, Nikhil Ghosh, Bin Yu
12 Jun 2024

SuperLoRA: Parameter-Efficient Unified Adaptation of Multi-Layer Attention Modules
Xiangyu Chen, Jing Liu, Ye Wang, Pu Wang, Matthew Brand, Guanghui Wang, T. Koike-Akino
18 Mar 2024

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018