
OPLoRA: Orthogonal Projection LoRA Prevents Catastrophic Forgetting during Parameter-Efficient Fine-Tuning

Main: 7 pages, 2 figures, 7 tables; Bibliography: 2 pages
Abstract

Low-Rank Adaptation (LoRA) enables efficient fine-tuning of large language models but suffers from catastrophic forgetting when learned updates interfere with the dominant singular directions that encode essential pre-trained knowledge. We propose Orthogonal Projection LoRA (OPLoRA), a theoretically grounded approach that prevents this interference through double-sided orthogonal projections. By decomposing frozen weights via SVD, OPLoRA constrains LoRA updates to lie entirely within the orthogonal complement of the top-$k$ singular subspace using the projections $P_L = I - U_k U_k^\top$ and $P_R = I - V_k V_k^\top$. We prove that this construction exactly preserves the top-$k$ singular triples, providing mathematical guarantees for knowledge retention. To quantify subspace interference, we introduce $\rho_k$, a metric measuring the alignment of an update with the dominant directions. Extensive experiments across commonsense reasoning, mathematics, and code generation demonstrate that OPLoRA significantly reduces forgetting while maintaining competitive task-specific performance on LLaMA-2 7B and Qwen2.5 7B, establishing orthogonal projection as an effective mechanism for knowledge preservation in parameter-efficient fine-tuning.
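A minimal PyTorch sketch of the double-sided projection described in the abstract: it builds $P_L$ and $P_R$ from the SVD of a frozen weight matrix and applies them on both sides of a LoRA update $BA$ so that the update lies in the orthogonal complement of the top-$k$ singular subspace. The function names, shapes, and the way the update is merged are illustrative assumptions, not the authors' released implementation.

```python
import torch

def oplora_projectors(W: torch.Tensor, k: int):
    """Build double-sided orthogonal projectors onto the complement of
    the top-k singular subspace of a frozen weight matrix W (m x n)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_k = U[:, :k]          # top-k left singular vectors  (m x k)
    V_k = Vh[:k, :].T       # top-k right singular vectors (n x k)
    P_L = torch.eye(W.shape[0]) - U_k @ U_k.T   # P_L = I - U_k U_k^T  (m x m)
    P_R = torch.eye(W.shape[1]) - V_k @ V_k.T   # P_R = I - V_k V_k^T  (n x n)
    return P_L, P_R

def oplora_update(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor, k: int):
    """Merge a LoRA update as W + P_L (B A) P_R, keeping the update
    orthogonal to the top-k singular subspace of W."""
    P_L, P_R = oplora_projectors(W, k)
    return W + P_L @ (B @ A) @ P_R

# Example (illustrative sizes): a rank-8 LoRA update on a 256 x 512 frozen
# weight, protecting the k = 16 dominant singular directions.
torch.manual_seed(0)
W = torch.randn(256, 512)
A = torch.randn(8, 512) * 0.01   # LoRA "A" factor (r x n)
B = torch.randn(256, 8) * 0.01   # LoRA "B" factor (m x r)
W_new = oplora_update(W, A, B, k=16)
```

Because $U_k^\top P_L = 0$ and $P_R V_k = 0$, the projected term cannot act on the protected directions, so the top-$k$ singular triples of $W$ remain singular triples of the merged weight, which is the preservation property the abstract claims.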
