Towards Symmetric Low-Rank Adapters

Abstract
In this paper, we introduce Symmetric Low-Rank Adapters, an optimized variant of LoRA with even fewer weights. This method utilizes low-rank symmetric weight matrices to learn downstream tasks more efficiently. Traditional LoRA accumulates fine-tuning weights with the original pre-trained weights via a Singular Value Decomposition (SVD)-like approach, i.e., model weights are fine-tuned via updates of the form $W_0 + BA$ (where $B \in \mathbb{R}^{d \times r}$, $A \in \mathbb{R}^{r \times d}$, and $r$ is the rank of the merged weight matrix). In contrast, our approach, named SymLoRA, represents the fine-tuning weights as a spectral decomposition, i.e., $W_0 + Q\,\mathrm{diag}(\lambda)\,Q^\top$, where $Q \in \mathbb{R}^{d \times r}$ and $\lambda \in \mathbb{R}^{r}$. SymLoRA requires approximately half of the fine-tuning weights. Here, we show that this approach incurs negligible losses in downstream efficacy.
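To make the parameterization concrete, below is a minimal PyTorch sketch of a SymLoRA-style linear layer, assuming the symmetric update $Q\,\mathrm{diag}(\lambda)\,Q^\top$ described in the abstract; the class name `SymLoRALinear`, the `rank` argument, and the initialization are illustrative assumptions, not the authors' released code. Note that LoRA's $B$ and $A$ together hold $2dr$ trainable weights, while $Q$ and $\lambda$ hold $dr + r$, roughly half.

```python
# Minimal sketch (assumed implementation, not the authors' code) of a SymLoRA-style
# linear layer: the frozen weight W0 is updated with a symmetric low-rank term
# Q diag(lambda) Q^T instead of LoRA's B A product.
import torch
import torch.nn as nn


class SymLoRALinear(nn.Module):
    def __init__(self, base_linear: nn.Linear, rank: int = 8):
        super().__init__()
        assert base_linear.in_features == base_linear.out_features, \
            "the symmetric update Q diag(lambda) Q^T requires a square weight matrix"
        self.base = base_linear
        self.base.weight.requires_grad_(False)  # freeze the pre-trained weights
        d = base_linear.in_features
        # Trainable factors: Q in R^{d x r} and lambda in R^r (dr + r parameters,
        # versus 2dr for LoRA's B and A at the same rank).
        self.Q = nn.Parameter(torch.randn(d, rank) * 0.01)
        self.lam = nn.Parameter(torch.zeros(rank))  # zero update at initialization

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Delta W = Q diag(lambda) Q^T, applied as ((x Q) * lambda) Q^T
        delta = (x @ self.Q) * self.lam @ self.Q.T
        return self.base(x) + delta


# Usage: wrap a square projection, e.g., a 768-wide attention projection.
layer = SymLoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```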
```bibtex
@article{panoutsos2025_2504.03719,
  title   = {Towards Symmetric Low-Rank Adapters},
  author  = {Tales Panoutsos and Rodrygo L. T. Santos and Flavio Figueiredo},
  journal = {arXiv preprint arXiv:2504.03719},
  year    = {2025}
}
```