Entropy-Based Block Pruning for Efficient Large Language Models

4 April 2025
Liangwei Yang
Yuhui Xu
Juntao Tan
Doyen Sahoo
Silvio Savarese
Caiming Xiong
Huan Wang
Shelby Heinecke
Abstract

As large language models continue to scale, their growing computational and storage demands pose significant challenges for real-world deployment. In this work, we investigate redundancy within Transformer-based models and propose an entropy-based pruning strategy to enhance efficiency while maintaining performance. Empirical analysis reveals that the entropy of hidden representations decreases in the early blocks but progressively increases across most subsequent blocks. This trend suggests that entropy serves as a more effective measure of information richness within computation blocks. Unlike cosine similarity, which primarily captures geometric relationships, entropy directly quantifies uncertainty and information content, making it a more reliable criterion for pruning. Extensive experiments demonstrate that our entropy-based pruning approach surpasses cosine similarity-based methods in reducing model size while preserving accuracy, offering a promising direction for efficient model deployment.
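The sketch below illustrates the general idea described in the abstract: score each Transformer block by the entropy change it induces in the hidden representations, then drop the blocks with the smallest entropy gain. It assumes a HuggingFace-style decoder (e.g., a LLaMA-family model exposing `model.model.layers` and `output_hidden_states=True`); the spectral entropy estimator and the pruning rule are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of entropy-based block scoring and pruning.
# Assumptions: HuggingFace-style causal LM with accessible per-layer hidden
# states; the entropy estimator below (entropy of the normalized covariance
# spectrum) is a stand-in, not necessarily the paper's estimator.
import torch

def matrix_entropy(h: torch.Tensor, eps: float = 1e-8) -> float:
    """Estimate the information content of a (tokens, dim) hidden-state matrix
    as the Shannon entropy of its normalized covariance spectrum."""
    h = h.float() - h.float().mean(dim=0, keepdim=True)
    cov = h.T @ h / h.shape[0]
    eigvals = torch.linalg.eigvalsh(cov).clamp_min(eps)
    p = eigvals / eigvals.sum()            # normalize spectrum to a distribution
    return float(-(p * p.log()).sum())

@torch.no_grad()
def score_blocks(model, input_ids: torch.Tensor) -> list[float]:
    """Score each block by the entropy change between its input and output."""
    out = model(input_ids, output_hidden_states=True)
    hs = out.hidden_states                 # tuple of (num_layers + 1) tensors
    scores = []
    for i in range(len(hs) - 1):
        before = matrix_entropy(hs[i][0])      # first batch element, for brevity
        after = matrix_entropy(hs[i + 1][0])
        scores.append(after - before)          # low gain -> pruning candidate
    return scores

def prune_blocks(model, scores: list[float], num_prune: int):
    """Remove the num_prune blocks with the smallest entropy gain
    (assumes a LLaMA-style module list at model.model.layers)."""
    keep = sorted(sorted(range(len(scores)), key=lambda i: scores[i])[num_prune:])
    model.model.layers = torch.nn.ModuleList(model.model.layers[i] for i in keep)
    return model
```

In this framing, the contrast with cosine similarity is that the score measures how much information a block adds rather than how geometrically close its output is to its input.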

View on arXiv: https://arxiv.org/abs/2504.03794
@article{yang2025_2504.03794,
  title={Entropy-Based Block Pruning for Efficient Large Language Models},
  author={Liangwei Yang and Yuhui Xu and Juntao Tan and Doyen Sahoo and Silvio Savarese and Caiming Xiong and Huan Wang and Shelby Heinecke},
  journal={arXiv preprint arXiv:2504.03794},
  year={2025}
}