High-Layer Attention Pruning with Rescaling

2 July 2025
Songtao Liu
Peng Liu
arXiv (abs) · PDF · HTML
Main: 15 pages · 3 figures · 8 tables · Bibliography: 2 pages · Appendix: 5 pages
Abstract

Pruning is a highly effective approach for compressing large language models (LLMs), significantly reducing inference latency. However, conventional training-free structured pruning methods often employ a heuristic metric that indiscriminately removes some attention heads across all pruning layers, without considering their positions within the network architecture. In this work, we propose a novel pruning algorithm that strategically prunes attention heads in the model's higher layers. Since the removal of attention heads can alter the magnitude of token representations, we introduce an adaptive rescaling parameter that calibrates the representation scale post-pruning to counteract this effect. We conduct comprehensive experiments on a wide range of LLMs, including LLaMA3.1-8B, Mistral-7B-v0.3, Qwen2-7B, and Gemma2-9B. Our evaluation includes both generation and discriminative tasks across 27 datasets. The results consistently demonstrate that our method outperforms existing structured pruning methods. This improvement is particularly notable in generation tasks, where our approach significantly outperforms existing baselines.
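The abstract describes two ingredients: pruning attention heads only in the model's higher layers, and an adaptive rescaling parameter calibrated after pruning so that token representations keep roughly their original magnitude. The sketch below illustrates that idea on a toy multi-head attention stack; it is an assumption-laden illustration based only on the abstract, not the authors' code. The names `SimpleSelfAttention`, `prune_high_layers`, `keep_ratio`, the weight-norm head score, and the norm-matching calibration are all illustrative choices, and the paper's actual importance metric and rescaling procedure may differ.

```python
# Hedged sketch: prune attention heads in higher layers only, then calibrate a
# rescaling factor so the pruned output matches the pre-pruning representation norm.
import torch
import torch.nn as nn


class SimpleSelfAttention(nn.Module):
    """Toy multi-head self-attention block used to illustrate head pruning."""

    def __init__(self, d_model: int = 64, n_heads: int = 8):
        super().__init__()
        self.d_model, self.n_heads = d_model, n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.out = nn.Linear(d_model, d_model, bias=False)
        # Per-head keep mask and a calibrated rescaling factor (both start as identity).
        self.register_buffer("head_mask", torch.ones(n_heads))
        self.rescale = nn.Parameter(torch.tensor(1.0), requires_grad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(z):  # (b, t, d_model) -> (b, heads, t, d_head)
            return z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        heads = attn @ v                                  # (b, heads, t, d_head)
        heads = heads * self.head_mask.view(1, -1, 1, 1)  # zero out pruned heads
        y = heads.transpose(1, 2).reshape(b, t, self.d_model)
        # Rescale to counteract the magnitude change caused by removed heads.
        return self.rescale * self.out(y)


def prune_high_layers(layers, start_layer: int, keep_ratio: float, calib_x: torch.Tensor):
    """Prune heads only in layers >= start_layer and calibrate a rescaling factor."""
    for i, layer in enumerate(layers):
        if i < start_layer:
            continue
        with torch.no_grad():
            ref = layer(calib_x)  # output before pruning (rescale is still 1.0)
            # Illustrative importance score: output-projection weight norm per head.
            w = layer.out.weight.view(layer.d_model, layer.n_heads, layer.d_head)
            score = w.norm(dim=(0, 2))
            n_keep = max(1, int(round(keep_ratio * layer.n_heads)))
            keep = score.topk(n_keep).indices
            layer.head_mask.zero_()
            layer.head_mask[keep] = 1.0
            pruned = layer(calib_x)  # output after pruning, before rescaling
            # Adaptive rescaling: match the representation norm on a calibration
            # batch so downstream layers see magnitudes similar to before pruning.
            layer.rescale.fill_((ref.norm() / pruned.norm().clamp_min(1e-6)).item())


if __name__ == "__main__":
    torch.manual_seed(0)
    layers = nn.ModuleList([SimpleSelfAttention() for _ in range(6)])
    calib_x = torch.randn(2, 16, 64)  # tiny calibration batch
    prune_high_layers(layers, start_layer=4, keep_ratio=0.5, calib_x=calib_x)
    print([int(l.head_mask.sum()) for l in layers])  # e.g. [8, 8, 8, 8, 4, 4]
```

In this sketch, only layers at or above `start_layer` are touched, matching the abstract's restriction of pruning to higher layers, and the rescaling factor is computed per layer from a small calibration batch rather than learned.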

@article{liu2025_2507.01900,
  title={High-Layer Attention Pruning with Rescaling},
  author={Songtao Liu and Peng Liu},
  journal={arXiv preprint arXiv:2507.01900},
  year={2025}
}