
A Single Linear Layer Yields Task-Adapted Low-Rank Matrices

Abstract

Low-Rank Adaptation (LoRA) is a widely used Parameter-Efficient Fine-Tuning (PEFT) method that updates an initial weight matrix $W_0$ with a delta matrix $\Delta W$ composed of two low-rank matrices $A$ and $B$. A previous study suggested that there is a correlation between $W_0$ and $\Delta W$. In this study, we delve deeper into the relationship between $W_0$ and the low-rank matrices $A$ and $B$ to further understand the behavior of LoRA. In particular, we analyze a conversion matrix that transforms $W_0$ into the low-rank matrices and thus encapsulates information about this relationship. Our analysis reveals that the conversion matrices are similar across layers. Inspired by these findings, we hypothesize that a single linear layer, which takes each layer's $W_0$ as input, can yield task-adapted low-rank matrices. To confirm this hypothesis, we devise a method named Conditionally Parameterized LoRA (CondLoRA) that updates initial weight matrices with low-rank matrices derived from a single linear layer. Our empirical results show that CondLoRA maintains performance on par with LoRA, despite having fewer trainable parameters than LoRA. We therefore conclude that "a single linear layer yields task-adapted low-rank matrices."
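To make the idea concrete, below is a minimal PyTorch sketch of a CondLoRA-style layer in which the low-rank factors are not free parameters but are produced from the frozen $W_0$ by small linear maps shared across all adapted layers. The module name `CondLoRALinear`, the split into two shared maps `conv_a`/`conv_b` (one per factor), and the scaling and initialization choices are our assumptions for illustration; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CondLoRALinear(nn.Module):
    """Frozen linear layer plus a CondLoRA-style low-rank update (illustrative sketch).

    Instead of training per-layer matrices A and B as in LoRA, A and B are
    derived from the frozen weight W0 by small linear maps (conv_a, conv_b)
    that are shared across every adapted layer.
    """

    def __init__(self, base: nn.Linear, conv_a: nn.Linear, conv_b: nn.Linear,
                 scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # W0 stays frozen, as in LoRA
        self.conv_a = conv_a  # shared map: d_out -> r, yields A of shape (r, d_in)
        self.conv_b = conv_b  # shared map: d_in  -> r, yields B of shape (d_out, r)
        self.scaling = scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W0 = self.base.weight                 # (d_out, d_in), frozen
        A = self.conv_a(W0.T).T               # (r, d_in)
        B = self.conv_b(W0)                   # (d_out, r)
        delta = B @ A                         # (d_out, d_in) low-rank update
        return self.base(x) + self.scaling * F.linear(x, delta)


# Usage: one pair of shared conversion maps serves all adapted layers, so only
# conv_a and conv_b (hypothetical names) are trained during fine-tuning.
d_in, d_out, r = 768, 768, 8
conv_a = nn.Linear(d_out, r, bias=False)
conv_b = nn.Linear(d_in, r, bias=False)
nn.init.zeros_(conv_b.weight)  # start with delta W = 0, mirroring LoRA's B = 0 init (an assumption)
layers = [CondLoRALinear(nn.Linear(d_in, d_out), conv_a, conv_b) for _ in range(12)]
```

Under these assumptions the trainable-parameter count is $2 \cdot r \cdot d$ in total (here $2 \cdot 8 \cdot 768 = 12{,}288$), independent of the number of adapted layers, whereas standard LoRA trains $r \cdot (d_\text{in} + d_\text{out})$ parameters per layer.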
