LIFT+: Lightweight Fine-Tuning for Long-Tail Learning

17 April 2025
Jiang-Xin Shi
Tong Wei
Yu-Feng Li
Abstract

The fine-tuning paradigm has emerged as a prominent approach for addressing long-tail learning tasks in the era of foundation models. However, the impact of fine-tuning strategies on long-tail learning performance remains unexplored. In this work, we disclose that existing paradigms exhibit a profound misuse of fine-tuning methods, leaving significant room for improvement in both efficiency and accuracy. Specifically, we reveal that heavy fine-tuning (fine-tuning a large proportion of model parameters) can lead to non-negligible performance deterioration on tail classes, whereas lightweight fine-tuning demonstrates superior effectiveness. Through comprehensive theoretical and empirical validation, we identify this phenomenon as stemming from inconsistent class conditional distributions induced by heavy fine-tuning. Building on this insight, we propose LIFT+, an innovative lightweight fine-tuning framework to optimize consistent class conditions. Furthermore, LIFT+ incorporates semantic-aware initialization, minimalist data augmentation, and test-time ensembling to enhance adaptation and generalization of foundation models. Our framework provides an efficient and accurate pipeline that facilitates fast convergence and model compactness. Extensive experiments demonstrate that LIFT+ significantly reduces both training epochs (from ∼100 to ≤15) and learned parameters (less than 1%), while surpassing state-of-the-art approaches by a considerable margin. The source code is available at this https URL.
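To make the idea of lightweight fine-tuning concrete, here is a minimal sketch (not the authors' released implementation) that freezes a pre-trained backbone and trains only a small classifier head, so that far less than 1% of the parameters are updated. The use of torchvision's ViT-B/16, the 768-dimensional embedding, and the plain linear head are assumptions for illustration; LIFT+'s semantic-aware initialization, minimalist augmentation, and test-time ensembling are not reproduced here.

# Minimal sketch of lightweight fine-tuning on a frozen foundation model.
# Assumption: torchvision's ViT-B/16 backbone with a plain linear head; the
# LIFT+ components (semantic-aware initialization, minimalist augmentation,
# test-time ensembling) are deliberately omitted.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

num_classes = 100  # placeholder, e.g., CIFAR-100-LT

backbone = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
backbone.heads = nn.Identity()   # drop the original classification head
backbone.eval()                  # backbone is used as a frozen feature extractor

# Freeze every backbone parameter: only the new head will be learned.
for p in backbone.parameters():
    p.requires_grad = False

head = nn.Linear(768, num_classes)  # 768 = ViT-B/16 embedding dimension

trainable = sum(p.numel() for p in head.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {trainable / total:.4%}")  # well under 1%

optimizer = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)

def training_step(images, labels):
    """One lightweight fine-tuning step: the backbone stays frozen."""
    with torch.no_grad():
        feats = backbone(images)     # (B, 768) features from the frozen ViT
    logits = head(feats)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this setup only the linear head's weights receive gradients, which mirrors the paper's emphasis on updating a small fraction of parameters rather than the full model.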

@article{shi2025_2504.13282,
  title={LIFT+: Lightweight Fine-Tuning for Long-Tail Learning},
  author={Jiang-Xin Shi and Tong Wei and Yu-Feng Li},
  journal={arXiv preprint arXiv:2504.13282},
  year={2025}
}