Controlled Low-Rank Adaptation with Subspace Regularization for Continued Training on Large Language Models

22 October 2024
Yuheng Lu
Bingshuo Qian
Caixia Yuan
Huixing Jiang
Xiaojie Wang
Abstract

Large language models (LLMs) exhibit remarkable capabilities in natural language processing but suffer from catastrophic forgetting when learning new tasks: adaptation to a new domain leads to a substantial decline in performance on previous tasks. In this paper, we propose Controlled LoRA (CLoRA), a subspace regularization method on the LoRA structure. Aiming to reduce the scale of output change while introducing minimal constraints on model capacity, CLoRA imposes a constraint on the direction of the updating matrix's null space. Experimental results on one-stage LLM finetuning tasks and continual learning settings highlight the superiority of CLoRA as an effective parameter-efficient finetuning method that mitigates catastrophic forgetting. Further investigation of model parameters indicates that CLoRA effectively balances the trade-off between model capacity and degree of forgetting.
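To make the core idea concrete, below is a minimal PyTorch sketch of one plausible reading of the abstract, not the authors' reference implementation. It assumes that each LoRA layer keeps a fixed, randomly sampled orthonormal matrix P, and that a penalty ||(B A) P||_F^2 pushes the columns of P toward the null space of the update ΔW = B A, limiting the scale of output change while leaving the remaining directions unconstrained. The class name CLoRALinear, the method reg_loss, and the weight lambda_reg = 0.1 are all illustrative assumptions, not names from the paper.

# Hypothetical sketch of a CLoRA-style null-space regularizer; the exact
# penalty form is an assumption based on the abstract, not the paper's code.
import torch
import torch.nn as nn

class CLoRALinear(nn.Module):
    """LoRA adapter with a null-space regularization term (illustrative)."""

    def __init__(self, base: nn.Linear, rank: int = 8, k: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight

        d_out, d_in = base.weight.shape
        # Standard LoRA factors: delta_W = B @ A, with B initialized to zero.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Fixed orthonormal directions P (d_in x k) that the update should
        # approximately annihilate; sampled once, never trained.
        P, _ = torch.linalg.qr(torch.randn(d_in, k))
        self.register_buffer("P", P)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.T @ self.B.T

    def reg_loss(self) -> torch.Tensor:
        # ||(B @ A) @ P||_F^2 is small when P's columns lie in the null
        # space of the update, so outputs change little on those directions.
        return ((self.B @ self.A) @ self.P).pow(2).sum()

# Usage sketch: add the penalty to the task loss with an assumed weight.
layer = CLoRALinear(nn.Linear(512, 512), rank=8, k=16)
x = torch.randn(4, 512)
task_loss = layer(x).pow(2).mean()          # placeholder task objective
loss = task_loss + 0.1 * layer.reg_loss()   # lambda_reg = 0.1 is arbitrary
loss.backward()

Under this reading, larger k trades capacity for stability: more protected directions mean smaller output change on prior tasks but tighter constraints on the update, which matches the capacity-versus-forgetting trade-off the abstract describes.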

@article{lu2025_2410.16801,
  title={Controlled Low-Rank Adaptation with Subspace Regularization for Continued Training on Large Language Models},
  author={Yuheng Lu and Bingshuo Qian and Caixia Yuan and Huixing Jiang and Xiaojie Wang},
  journal={arXiv preprint arXiv:2410.16801},
  year={2025}
}