
Optimizing Fine-Tuning through Advanced Initialization Strategies for Low-Rank Adaptation

4 October 2025 · Yongfu Xue · AI4CE
Links: arXiv (abs) · PDF · HTML · GitHub
Main: 5 pages · 6 figures · Bibliography: 2 pages · 7 tables
Abstract

The rapid development of parameter-efficient fine-tuning methods has noticeably improved the efficiency of adapting large language models. Among these, LoRA has gained widespread popularity due to its strong balance of effectiveness and parameter efficiency. However, LoRA relies on initializing two low-rank matrices whose product is zero, which limits its ability to effectively activate and leverage the original model weights, creating a potential bottleneck for optimal performance. To address this limitation, we propose IniLoRA, a novel initialization strategy that initializes the low-rank matrices to closely approximate the original model weights. Experimental results indicate that IniLoRA achieves better performance than LoRA across a range of models and tasks. Additionally, we introduce two variants, IniLoRA-α and IniLoRA-β, both leveraging distinct initialization methods to enhance performance further.
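
To make the contrast concrete, the sketch below shows standard LoRA initialization (one factor zero, so the adapter starts at the zero matrix) next to a hypothetical approximate-initialization scheme in the spirit the abstract describes. The truncated-SVD choice here is only one plausible way to make the factor product approximate the original weight W; the paper's actual IniLoRA procedure and hyperparameters are not specified in this abstract.

import torch

def lora_init(out_features: int, in_features: int, rank: int):
    """Standard LoRA: A is small random, B is zero, so B @ A == 0 at start."""
    A = torch.randn(rank, in_features) * 0.01
    B = torch.zeros(out_features, rank)
    return B, A

def approx_weight_init(W: torch.Tensor, rank: int):
    """Hypothetical IniLoRA-style sketch: pick low-rank factors whose product
    approximates the original weight W. Truncated SVD is an assumption made
    for illustration, not the paper's confirmed method."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    B = U[:, :rank] * S[:rank]   # shape (out_features, rank)
    A = Vh[:rank, :]             # shape (rank, in_features)
    return B, A

# Quick check of the approximation quality on a toy weight matrix.
W = torch.randn(64, 64)
B, A = approx_weight_init(W, rank=8)
print((W - B @ A).norm() / W.norm())  # relative reconstruction error

In standard LoRA the adapted weight starts exactly at W (since B @ A = 0), whereas an approximation-based start like the one sketched above changes the effective initial weight, which is the design space the abstract's IniLoRA, IniLoRA-α, and IniLoRA-β variants explore.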
