Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization

Xinhao Yao
Hongjin Qian
Xiaolin Hu
Gengze Xu
Yong Liu
Wei Liu
Jian Luan
Bin Wang
Abstract

Large Language Models (LLMs), built on Transformer architectures, exhibit remarkable generalization across a wide range of tasks. However, fine-tuning these models for specific tasks remains resource-intensive due to their extensive parameterization. In this paper, we investigate two notable phenomena related to the attention mechanism during the fine-tuning of LLMs. The first phenomenon, termed "Unequal Importance of Attention Matrices," highlights the impact of fine-tuning different weight matrices. It shows that optimizing the $\mathbf{W}_v$ matrix yields significantly better performance than optimizing the $\mathbf{W}_k$ matrix. Fine-tuning only the $\mathbf{W}_q$ and $\mathbf{W}_v$ matrices is computationally efficient while delivering results comparable to, or even better than, fine-tuning all three matrices ($\mathbf{W}_q$, $\mathbf{W}_k$, and $\mathbf{W}_v$). The second phenomenon, "Attention Matrices with Customized Learning Rate Leads to Better Convergence," emphasizes the importance of assigning distinct learning rates to these matrices. Specifically, a higher learning rate for the $\mathbf{W}_v$ matrix than for $\mathbf{W}_q$ and $\mathbf{W}_k$ accelerates convergence and improves performance. Building on these insights, we propose a new strategy that improves fine-tuning efficiency in terms of both storage and time. Experimental results on benchmark datasets validate the effectiveness of this approach, supporting our theoretical findings. Our analysis lays the theoretical groundwork for configuring and improving lightweight fine-tuning algorithms for LLMs.
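
To make the two phenomena concrete, below is a minimal sketch (not the authors' implementation) of how one might apply them with PyTorch and Hugging Face Transformers: only the query and value projections are unfrozen, and the value projection is given a larger learning rate than the query projection. The module names "q_proj"/"v_proj", the checkpoint name, the base learning rate, and the multiplier are illustrative assumptions, not values taken from the paper.

# Minimal sketch of the two phenomena, under the assumptions stated above.
import torch
from transformers import AutoModelForCausalLM

# Assumed checkpoint; any LLaMA-style model with q_proj/k_proj/v_proj modules works similarly.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Phenomenon 1: fine-tune only the W_q and W_v projections; freeze everything else.
for name, param in model.named_parameters():
    param.requires_grad = any(key in name for key in ("q_proj", "v_proj"))

# Phenomenon 2: assign W_v a higher learning rate than W_q via optimizer parameter groups.
base_lr = 1e-4          # illustrative value
v_lr_multiplier = 8.0   # "higher learning rate for W_v"; the actual ratio is task-dependent

q_params = [p for n, p in model.named_parameters() if "q_proj" in n and p.requires_grad]
v_params = [p for n, p in model.named_parameters() if "v_proj" in n and p.requires_grad]

optimizer = torch.optim.AdamW([
    {"params": q_params, "lr": base_lr},
    {"params": v_params, "lr": base_lr * v_lr_multiplier},
])

The same parameter-group pattern carries over to adapter-based methods (e.g., LoRA modules attached to the query and value projections), which is how the storage and time savings described in the abstract would typically be realized in practice.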

@article{yao2025_2410.02247,
  title={Theoretical Insights into Fine-Tuning Attention Mechanism: Generalization and Optimization},
  author={Xinhao Yao and Hongjin Qian and Xiaolin Hu and Gengze Xu and Yong Liu and Wei Liu and Jian Luan and Bin Wang},
  journal={arXiv preprint arXiv:2410.02247},
  year={2025}
}