
Selective Aggregation for Low-Rank Adaptation in Federated Learning

International Conference on Learning Representations (ICLR), 2025
Main: 10 pages · Appendix: 10 pages · Bibliography: 5 pages · 6 figures · 10 tables
Abstract

We investigate LoRA in federated learning through the lens of an asymmetry analysis of the learned A and B matrices. In doing so, we uncover that A matrices are responsible for learning general knowledge, while B matrices focus on capturing client-specific knowledge. Based on this finding, we introduce Federated Share-A Low-Rank Adaptation (FedSA-LoRA), which employs two low-rank trainable matrices A and B to model the weight update, but only the A matrices are shared with the server for aggregation. Moreover, we delve into the relationship between the learned A and B matrices in other LoRA variants, such as rsLoRA and VeRA, revealing a consistent pattern. Consequently, we extend our FedSA-LoRA method to these LoRA variants, resulting in FedSA-rsLoRA and FedSA-VeRA. In this way, we establish a general paradigm for integrating LoRA with FL, offering guidance for future work on subsequent LoRA variants combined with FL. Extensive experimental results on natural language understanding and generation tasks demonstrate the effectiveness of the proposed method. Our code is available at this https URL.
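To make the aggregation rule concrete, below is a minimal PyTorch sketch of one FedSA-LoRA communication round. It is an illustration under assumptions, not the released implementation: `LoRAClient`, `local_train`, `server_aggregate_A`, and `federated_round` are hypothetical names, and the weight update is modeled as ΔW = BA with A shaped (r × d_in) and B shaped (d_out × r). Only the A factors are averaged on the server; each B factor stays on its client.

```python
import torch
import torch.nn as nn

# Hypothetical minimal client holding one pair of LoRA factors,
# so that the weight update is Delta W = B @ A.
class LoRAClient:
    def __init__(self, d_in=16, d_out=16, rank=4):
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # shared factor
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # client-specific factor

    def local_train(self):
        # Placeholder for the client's local optimization steps on A and B.
        pass

def server_aggregate_A(client_As, weights=None):
    """FedAvg restricted to the A factors: a (weighted) average."""
    n = len(client_As)
    weights = weights or [1.0 / n] * n
    agg = torch.zeros_like(client_As[0])
    for A, w in zip(client_As, weights):
        agg += w * A
    return agg

def federated_round(clients):
    # 1) Each client updates both A and B locally.
    for c in clients:
        c.local_train()
    # 2) Server aggregates only the A matrices.
    global_A = server_aggregate_A([c.A.detach() for c in clients])
    # 3) The shared A is broadcast back; B never leaves the client.
    with torch.no_grad():
        for c in clients:
            c.A.copy_(global_A)
```

For example, `federated_round([LoRAClient() for _ in range(3)])` runs one round over three toy clients; swapping the averaging step for any other aggregation rule on A alone preserves the share-A design.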
