LoRA Training in the NTK Regime has No Spurious Local Minima

19 February 2024
Uijeong Jang
Jason D. Lee
Ernest K. Ryu
arXiv:2402.11867
Abstract

Low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLMs), but our theoretical understanding of LoRA has been limited. In this work, we theoretically analyze LoRA fine-tuning in the neural tangent kernel (NTK) regime with $N$ data points, showing: (i) full fine-tuning (without LoRA) admits a low-rank solution of rank $r \lesssim \sqrt{N}$; (ii) using LoRA with rank $r \gtrsim \sqrt{N}$ eliminates spurious local minima, allowing gradient descent to find the low-rank solutions; (iii) the low-rank solution found using LoRA generalizes well.
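To make the setting concrete, the sketch below shows a minimal LoRA-style linear layer (frozen pretrained weight plus a trainable low-rank update $BA$) with the adapter rank chosen on the order of $\sqrt{N}$, the regime the abstract refers to. The class name, dimensions, and toy training loop are illustrative assumptions, not the authors' code or experimental setup.

```python
# Minimal sketch of a LoRA-adapted linear layer; rank chosen as r ~ sqrt(N),
# matching the regime analyzed in the paper. All names/sizes are illustrative.
import math
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, d_in: int, d_out: int, rank: int, alpha: float = 1.0):
        super().__init__()
        # Pretrained weight stays frozen; only the low-rank factors B @ A are trained.
        self.W0 = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        self.A = nn.Parameter(torch.randn(rank, d_in) / math.sqrt(d_in))
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: start exactly at W0
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W0 + (alpha/r) * B A, a rank-r perturbation of W0.
        return x @ (self.W0 + self.scale * self.B @ self.A).T


# Toy usage: with N data points, pick the LoRA rank r on the order of sqrt(N).
N, d_in, d_out = 256, 64, 64
rank = math.ceil(math.sqrt(N))
layer = LoRALinear(d_in, d_out, rank)

x = torch.randn(N, d_in)
y = torch.randn(N, d_out)
opt = torch.optim.SGD([layer.A, layer.B], lr=1e-2)  # gradient descent on the LoRA factors only
for _ in range(100):
    opt.zero_grad()
    loss = ((layer(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```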
