NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models

Fine-tuning pre-trained models often yields state-of-the-art performance but is computationally expensive when updating all parameters. Parameter-efficient fine-tuning (PEFT) methods, such as Low-Rank Adaptation (LoRA), address this by freezing pre-trained weights and introducing low-rank matrices. However, because LoRA relies on low-rank decomposition, it struggles to capture complex nonlinear dynamics and optimal optimization trajectories, resulting in a performance gap relative to full fine-tuning and inefficient parameter utilization. To overcome these issues, we propose NEAT, a nonlinear PEFT approach that employs a lightweight neural network to learn a nonlinear transformation of the pre-trained weights, thereby better approximating cumulative weight updates. Our theoretical analysis shows that NEAT achieves greater efficiency than LoRA while maintaining equivalent expressivity. Extensive experiments on four benchmarks and over twenty datasets demonstrate that NEAT significantly outperforms state-of-the-art baselines in both vision and text tasks.
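To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' released implementation) of how a frozen linear layer could be adapted by a lightweight network that maps the pre-trained weight matrix itself to a nonlinear update. The class name `NEATLinear`, the bottleneck width `r`, and the ReLU choice are illustrative assumptions; only the small adapter's parameters are trained.

```python
import torch
import torch.nn as nn


class NEATLinear(nn.Module):
    """Hypothetical sketch: adapt a frozen nn.Linear with a small MLP
    that computes a nonlinear transformation of the pre-trained weight W."""

    def __init__(self, base: nn.Linear, r: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # freeze pre-trained weights
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        # Lightweight bottleneck network applied row-wise to W (shape d_out x d_in),
        # producing an update of the same shape with roughly 2 * d_in * r trainable params.
        self.adapter = nn.Sequential(
            nn.Linear(d_in, r, bias=False),
            nn.ReLU(),                             # the nonlinearity LoRA's low-rank product lacks
            nn.Linear(r, d_in, bias=False),
        )
        nn.init.zeros_(self.adapter[-1].weight)    # start exactly from the pre-trained model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.adapter(self.base.weight)     # nonlinear function of W itself
        return nn.functional.linear(x, self.base.weight + delta, self.base.bias)
```

Usage would mirror LoRA-style wrapping, e.g. replacing a pre-trained projection with `NEATLinear(model.attn.q_proj, r=16)` and optimizing only the adapter parameters; the zero-initialized output layer keeps the adapted model identical to the pre-trained one at the start of training.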
@article{zhong2025_2410.01870,
  title   = {NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models},
  author  = {Yibo Zhong and Haoxiang Jiang and Lincan Li and Ryumei Nakada and Tianci Liu and Linjun Zhang and Huaxiu Yao and Haoyu Wang},
  journal = {arXiv preprint arXiv:2410.01870},
  year    = {2025}
}