Towards Symmetric Low-Rank Adapters

Abstract

In this paper, we introduce Symmetric Low-Rank Adapters, an optimized variant of LoRA with even fewer weights. This method utilizes Low-Rank Symmetric Weight Matrices to learn downstream tasks more efficiently. Traditional LoRA accumulates fine-tuning weights with the original pre-trained weights via a Singular Value Decomposition (SVD)-like approach, i.e., model weights are fine-tuned via updates of the form $BA$ (where $B \in \mathbb{R}^{n\times r}$, $A \in \mathbb{R}^{r\times n}$, and $r$ is the rank of the merged weight matrix). In contrast, our approach, named SymLoRA, represents fine-tuning weights as a Spectral Decomposition, i.e., $Q \,\mathrm{diag}(\Lambda)\, Q^T$, where $Q \in \mathbb{R}^{n\times r}$ and $\Lambda \in \mathbb{R}^r$. SymLoRA requires approximately half of the fine-tuning weights of LoRA. Here, we show that this approach incurs negligible losses in downstream efficacy.

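As a rough illustration of the parameterization described in the abstract, the following PyTorch sketch wraps a frozen square linear layer with a symmetric low-rank update $Q\,\mathrm{diag}(\Lambda)\,Q^T$. The class name SymLoRALinear, the initialization scheme, and the scaling factor are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class SymLoRALinear(nn.Module):
    """Sketch of a symmetric low-rank adapter on a frozen square linear layer.

    The update Q diag(Lambda) Q^T is symmetric, so it only applies when
    in_features == out_features (e.g., square attention projection matrices).
    """

    def __init__(self, base: nn.Linear, rank: int, scale: float = 1.0):
        super().__init__()
        assert base.in_features == base.out_features, "symmetric update needs a square weight"
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep the pre-trained weights frozen
        n = base.in_features
        # Q in R^{n x r} and Lambda in R^r: about half the parameters of LoRA's B and A
        self.Q = nn.Parameter(torch.randn(n, rank) * 0.01)
        self.lam = nn.Parameter(torch.zeros(rank))  # zero init: no change at the start of fine-tuning
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x (Q diag(Lambda) Q^T), computed in the low-rank space first
        delta = (x @ self.Q) * self.lam @ self.Q.t()
        return self.base(x) + self.scale * delta

For example, SymLoRALinear(nn.Linear(768, 768), rank=8) would add 768*8 + 8 trainable parameters, versus 2*768*8 for a rank-8 LoRA update on the same layer.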
@article{panoutsos2025_2504.03719,
  title={Towards Symmetric Low-Rank Adapters},
  author={Tales Panoutsos and Rodrygo L. T. Santos and Flavio Figueiredo},
  journal={arXiv preprint arXiv:2504.03719},
  year={2025}
}