ALPS: Attention Localization and Pruning Strategy for Efficient Alignment of Large Language Models

Aligning general-purpose large language models (LLMs) to downstream tasks often incurs significant training adjustment costs. Prior research has explored various avenues to enhance alignment efficiency, primarily through minimal-data training or data-driven activations to identify key attention heads. However, these approaches inherently introduce data dependency, which hinders generalization and reusability. To address this issue and enhance model alignment efficiency, we propose the \textit{\textbf{A}ttention \textbf{L}ocalization and \textbf{P}runing \textbf{S}trategy (\textbf{ALPS})}, an efficient algorithm that localizes the most task-sensitive attention heads and prunes the remainder by restricting attention training updates to the selected heads, thereby reducing alignment costs. Experimental results demonstrate that our method activates only \textbf{10\%} of attention parameters during fine-tuning while achieving a \textbf{2\%} performance improvement over baselines on three tasks. Moreover, the identified task-specific heads are transferable across datasets and mitigate knowledge forgetting. Our work and findings provide a novel perspective on efficient LLM alignment. The code is available at this https URL.
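The abstract specifies the mechanism only at a high level: score attention heads by task sensitivity, keep the top roughly 10\% trainable, and freeze updates to the rest. The sketch below illustrates that update-restriction step with gradient masking in PyTorch; the per-head importance scores are a stand-in (ALPS's actual scoring rule is defined in the paper, not here), and the toy fused-QKV weight layout, `num_layers`, `num_heads`, and `head_dim` are all illustrative assumptions.

```python
import torch

# Hypothetical per-head importance scores; random values stand in for
# ALPS's task-sensitivity measure, which the abstract does not specify.
num_layers, num_heads, head_dim = 4, 8, 64
hidden = num_heads * head_dim
scores = torch.rand(num_layers, num_heads)

# Keep only the top 10% most task-sensitive heads trainable.
k = max(1, int(0.10 * num_layers * num_heads))
flat_idx = scores.flatten().topk(k).indices
keep = torch.zeros(num_layers * num_heads, dtype=torch.bool)
keep[flat_idx] = True
keep = keep.view(num_layers, num_heads)

# Toy per-layer fused QKV projection weights, rows grouped by head.
qkv_weights = [torch.nn.Parameter(torch.randn(3 * hidden, hidden))
               for _ in range(num_layers)]

def make_grad_mask(layer_keep: torch.Tensor) -> torch.Tensor:
    """Row mask that zeroes gradients for frozen heads in a fused QKV matrix."""
    per_head = layer_keep.repeat_interleave(head_dim).float()  # (hidden,)
    return per_head.repeat(3).unsqueeze(1)                     # (3*hidden, 1)

for layer, w in enumerate(qkv_weights):
    mask = make_grad_mask(keep[layer])
    # Gradient hook: updates flow only to the selected heads' rows.
    w.register_hook(lambda g, m=mask: g * m)

# Quick check: backward through one layer and confirm frozen rows get zero grad.
x = torch.randn(2, hidden)
(x @ qkv_weights[0].t()).sum().backward()
frozen_rows = ~keep[0].repeat_interleave(head_dim).repeat(3)
assert qkv_weights[0].grad[frozen_rows].abs().sum() == 0
```

Masking gradients, rather than deleting head weights, matches the abstract's framing: the pruned heads remain intact for inference and knowledge retention, while fine-tuning touches only the small task-sensitive subset.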
@article{chen2025_2505.18799,
  title   = {ALPS: Attention Localization and Pruning Strategy for Efficient Alignment of Large Language Models},
  author  = {Hao Chen and Haoze Li and Zhiqing Xiao and Lirong Gao and Qi Zhang and Xiaomeng Hu and Ningtao Wang and Xing Fu and Junbo Zhao},
  journal = {arXiv preprint arXiv:2505.18799},
  year    = {2025}
}