
LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation

Abstract

Low-Rank Adaptation (LoRA) has emerged as a popular parameter-efficient fine-tuning (PEFT) method for Large Language Models (LLMs), yet it still incurs notable overhead and suffers from parameter interference in multi-task scenarios. We propose LoRA with Reduced Interference (LoRI), a simple yet effective approach that freezes the projection matrices A as random projections and sparsifies the matrices B using task-specific masks. This design substantially reduces the number of trainable parameters while maintaining strong task performance. Moreover, LoRI minimizes cross-task interference in adapter merging by leveraging the orthogonality between adapter subspaces, and supports continual learning by using sparsity to mitigate catastrophic forgetting. Extensive experiments across natural language understanding, mathematical reasoning, code generation, and safety alignment tasks demonstrate that LoRI outperforms full fine-tuning and existing PEFT methods, while using up to 95% fewer trainable parameters than LoRA. In multi-task experiments, LoRI enables effective adapter merging and continual learning with reduced cross-task interference. Code is available at: this https URL
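The adapter design described in the abstract (a frozen random projection A paired with a sparsely masked, trainable B) can be illustrated with a minimal PyTorch sketch. The class name LoRILinear, the rank, the sparsity level, and the randomly drawn mask below are illustrative assumptions, not the authors' reference implementation; the official code is at the linked repository.

import torch
import torch.nn as nn

class LoRILinear(nn.Module):
    """Linear layer with a LoRI-style adapter: A is a frozen random
    projection; B is trainable but restricted to a sparse, task-specific
    support via a binary mask."""

    def __init__(self, base: nn.Linear, rank: int = 8, sparsity: float = 0.9):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weight stays frozen

        d_out, d_in = base.out_features, base.in_features
        # Frozen random projection A (never updated during fine-tuning).
        self.A = nn.Parameter(torch.randn(rank, d_in) / rank**0.5,
                              requires_grad=False)
        # Trainable B, initialized to zero so the adapter starts as a no-op.
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Task-specific binary mask over B; drawn at random here purely for
        # illustration (the paper selects the mask per task).
        self.register_buffer("mask", (torch.rand(d_out, rank) > sparsity).float())

    def forward(self, x):
        # Only the masked entries of B contribute to the low-rank update.
        delta = (self.B * self.mask) @ self.A  # shape (d_out, d_in)
        return self.base(x) + x @ delta.T

A layer would be wrapped as, e.g., LoRILinear(nn.Linear(768, 768), rank=8); because A is fixed and B is sparse, only the unmasked entries of B are trained, which is where the reduction in trainable parameters comes from.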

@article{zhang2025_2504.07448,
  title={LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation},
  author={Juzheng Zhang and Jiacheng You and Ashwinee Panda and Tom Goldstein},
  journal={arXiv preprint arXiv:2504.07448},
  year={2025}
}