
MASA: Rethinking the Representational Bottleneck in LoRA with Multi-A Shared Adaptation

Main: 7 pages, Appendix: 5 pages, Bibliography: 2 pages; 5 figures, 9 tables
Abstract

Low-Rank Adaptation (LoRA) has emerged as a dominant method in Parameter-Efficient Fine-Tuning (PEFT) for large language models; it augments each transformer layer with one down-projection matrix A and one up-projection matrix B. However, LoRA's reliance on a single down-projection matrix A creates a representational bottleneck: this solitary feature extractor is inherently insufficient for capturing the diverse signals required by complex tasks. This motivates an architectural shift toward enriching feature adaptation to improve downstream task performance. We propose MASA (Multi-A Shared Adaptation), an architecture that implements a multi-A, single-B structure in which the ensemble of A-experts is asymmetrically shared across layers to preserve parameter efficiency. In MASA, these specialized experts capture diverse features, which are then integrated by a single, layer-specific B matrix. The effectiveness and versatility of our method are validated through a comprehensive suite of experiments spanning multi-domain generalization, single-domain specialization, and multi-task reasoning. For example, on the MMLU benchmark, MASA achieves an average accuracy of 59.62%, outperforming standard LoRA by 1.08 points (a relative improvement of 1.84%) with a comparable trainable-parameter budget of 0.52%.
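As a rough illustration of the multi-A, single-B structure the abstract describes, the sketch below shows one possible PyTorch layout: a bank of down-projection experts shared across adapted layers, and a layer-specific B matrix that integrates their concatenated features on top of a frozen base layer. The class names (SharedAExperts, MASALinear), the concatenation-based integration, the zero-initialization of B, and all hyperparameters are illustrative assumptions, not the authors' implementation; in particular, the paper's asymmetric cross-layer sharing scheme is not modeled here.

```python
# Minimal sketch of a multi-A / single-B adapter (assumptions noted above).
import torch
import torch.nn as nn


class SharedAExperts(nn.Module):
    """A bank of down-projection experts shared across layers."""

    def __init__(self, in_features: int, rank: int, num_experts: int):
        super().__init__()
        # Each expert maps the input to its own rank-r subspace.
        self.experts = nn.Parameter(torch.randn(num_experts, in_features, rank) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> (batch, num_experts, rank)
        return torch.einsum("bi,eir->ber", x, self.experts)


class MASALinear(nn.Module):
    """Frozen base linear layer plus a layer-specific B integrating the shared A-experts."""

    def __init__(self, base: nn.Linear, shared_a: SharedAExperts, rank: int, num_experts: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # base weights stay frozen, as in LoRA
        self.shared_a = shared_a             # the same expert bank is reused across layers
        # Layer-specific up-projection; zero-init so training starts from the base model.
        self.b = nn.Parameter(torch.zeros(num_experts * rank, base.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.shared_a(x)                 # (batch, num_experts, rank)
        h = h.flatten(start_dim=1)           # concatenate expert features
        return self.base(x) + h @ self.b     # frozen path + low-rank update


# Usage: one shared expert bank, one MASALinear per adapted layer (sizes are illustrative).
shared = SharedAExperts(in_features=768, rank=8, num_experts=4)
layer = MASALinear(nn.Linear(768, 768), shared, rank=8, num_experts=4)
out = layer(torch.randn(2, 768))
```

Under this layout, only the shared A-experts and each layer's B matrix are trainable, which is what keeps the added parameter count close to that of standard LoRA.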
