Look Within or Look Beyond? A Theoretical Comparison Between Parameter-Efficient and Full Fine-Tuning

28 May 2025
Yongkang Liu
Xingle Xu
Ercong Nie
Zijing Wang
Shi Feng
Daling Wang
Qian Li
Hinrich Schütze
arXiv (abs) · PDF · HTML
Main: 9 pages · 3 figures · 10 tables · Bibliography: 6 pages · Appendix: 9 pages
Abstract

Parameter-Efficient Fine-Tuning (PEFT) methods achieve performance comparable to Full Fine-Tuning (FFT) while requiring significantly fewer computing resources, making them the go-to choice for researchers. We find that although PEFT can achieve competitive results on some benchmarks, its performance falls short of FFT on complex tasks, such as reasoning and instruction-based fine-tuning. In this paper, we compare the characteristics of PEFT and FFT in terms of representational capacity and robustness, based on optimization theory. We theoretically demonstrate that PEFT is a strict subset of FFT. By providing theoretical upper bounds for PEFT, we show that the limited parameter space constrains the model's representational ability, making it more susceptible to perturbations. Experiments on 15 datasets encompassing classification, generation, reasoning, and instruction fine-tuning tasks, together with 11 adversarial test sets, validate our theories. We hope that these results spark further research beyond the realms of well-established PEFT. The source code is available in an anonymous GitHub repository (this https URL).
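A minimal sketch of the "strict subset" intuition, not taken from the paper: for a LoRA-style adapter (one common PEFT method), the weight update is constrained to a low-rank product B·A, so it can only reach a small slice of the updates that full fine-tuning can express. All dimensions and names below are hypothetical illustration choices.

# Illustration only (assumed LoRA-style PEFT, hypothetical layer sizes):
# a rank-r update B @ A has rank <= r, whereas FFT can realize any update.
import numpy as np

d_out, d_in, r = 64, 64, 4              # layer shape and adapter rank (r << d)

# Full fine-tuning: any d_out x d_in update Delta_W is reachable.
full_update = np.random.randn(d_out, d_in)
print("rank of a generic FFT update:", np.linalg.matrix_rank(full_update))   # typically 64

# LoRA-style PEFT: Delta_W = B @ A with B (d_out x r) and A (r x d_in),
# so rank(Delta_W) <= r -- a strict subset of the FFT update space.
B = np.random.randn(d_out, r)
A = np.random.randn(r, d_in)
lora_update = B @ A
print("rank of a LoRA-style update:", np.linalg.matrix_rank(lora_update))    # at most 4

# Trainable-parameter comparison for this single layer.
print("FFT trainable params :", d_out * d_in)         # 4096
print("PEFT trainable params:", r * (d_out + d_in))   # 512

The same counting argument extends layer by layer: the fewer free parameters PEFT trains, the smaller the set of models it can represent, which is the gap the paper's upper bounds quantify.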

@article{liu2025_2505.22355,
  title={Look Within or Look Beyond? A Theoretical Comparison Between Parameter-Efficient and Full Fine-Tuning},
  author={Yongkang Liu and Xingle Xu and Ercong Nie and Zijing Wang and Shi Feng and Daling Wang and Qian Li and Hinrich Schütze},
  journal={arXiv preprint arXiv:2505.22355},
  year={2025}
}