EntroLLM: Entropy Encoded Weight Compression for Efficient Large Language Model Inference on Edge Devices

5 May 2025
Arnab Sanyal
Prithwish Mukherjee
Gourav Datta
Sandeep P. Chinchali
Abstract

Large Language Models (LLMs) demonstrate exceptional performance across various tasks, but their large storage and computational requirements constrain their deployment on edge devices. To address this, we propose EntroLLM, a novel compression framework that integrates mixed quantization with entropy coding to reduce storage overhead while maintaining model accuracy. Our method applies a layer-wise mixed quantization scheme - choosing between symmetric and asymmetric quantization based on individual layer weight distributions - to optimize compressibility. We then employ Huffman encoding for lossless compression of the quantized weights, significantly reducing memory bandwidth requirements. Furthermore, we introduce parallel Huffman decoding, which enables efficient retrieval of encoded weights during inference, ensuring minimal latency impact. Our experiments on edge-compatible LLMs, including smolLM-1.7B-Instruct, phi3-mini-4k-Instruct, and mistral-7B-Instruct, demonstrate that EntroLLM achieves up to 30% storage reduction compared to uint8 models and up to 65% storage reduction compared to uint4 models, while preserving perplexity and accuracy on language benchmark tasks. We further show that our method enables 31.9%-146.6% faster inference throughput on memory-bandwidth-limited edge devices, such as NVIDIA Jetson P3450, by reducing the required data movement. The proposed approach requires no additional re-training and is fully compatible with existing post-training quantization methods, making it a practical solution for edge LLMs.
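As a rough illustration of the pipeline the abstract describes (a minimal sketch, not the authors' code), the following Python snippet quantizes one weight matrix with either a symmetric or asymmetric uint8 quantizer chosen by a simple, assumed distribution heuristic, then builds a Huffman code book over the quantized values and reports the resulting bit count relative to plain uint8 storage. The skew threshold, function names, and synthetic layer are illustrative assumptions; the paper's parallel Huffman decoding path is not reproduced here.

# Minimal sketch of the approach described in the abstract: per-layer choice
# between symmetric and asymmetric uint8 quantization, followed by lossless
# Huffman coding of the quantized weights. The selection heuristic, function
# names, and layer size are illustrative assumptions, not the authors'
# implementation; the paper's parallel decoder is not shown.

import heapq
from collections import Counter

import numpy as np


def quantize_symmetric(w, bits=8):
    # Symmetric quantization: zero-point fixed at 0, range [-2^(b-1), 2^(b-1)-1].
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int16)


def quantize_asymmetric(w, bits=8):
    # Asymmetric quantization: scale and zero-point stretch over [min, max].
    qmax = 2 ** bits - 1
    scale = (w.max() - w.min()) / qmax
    zero_point = np.round(-w.min() / scale)
    return np.clip(np.round(w / scale) + zero_point, 0, qmax).astype(np.int16)


def choose_scheme(w):
    # Illustrative heuristic: skewed weight distributions get the asymmetric scheme.
    skew = abs(float(w.mean() - np.median(w))) / (float(w.std()) + 1e-12)
    return "asymmetric" if skew > 0.05 else "symmetric"


def huffman_code(symbols):
    # Build a Huffman code book (quantized value -> bit string) from frequencies.
    freq = Counter(symbols.ravel().tolist())
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[f, [s, ""]] for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {s: code for s, code in heap[0][1:]}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layer_w = rng.normal(0.0, 0.02, size=(512, 512)).astype(np.float32)

    scheme = choose_scheme(layer_w)
    quantize = quantize_symmetric if scheme == "symmetric" else quantize_asymmetric
    q = quantize(layer_w)

    book = huffman_code(q)
    encoded_bits = sum(len(book[s]) for s in q.ravel().tolist())
    print(f"scheme={scheme}  uint8 bits={q.size * 8}  "
          f"huffman bits={encoded_bits}  ratio={encoded_bits / (q.size * 8):.2f}")

Because Huffman coding is lossless, decoding recovers exactly the integer tensor produced by the quantizer, so accuracy relative to the quantized baseline is unchanged; the storage and bandwidth gains come entirely from the entropy of the quantized weight distribution.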

@article{sanyal2025_2505.02380,
  title={EntroLLM: Entropy Encoded Weight Compression for Efficient Large Language Model Inference on Edge Devices},
  author={Arnab Sanyal and Prithwish Mukherjee and Gourav Datta and Sandeep P. Chinchali},
  journal={arXiv preprint arXiv:2505.02380},
  year={2025}
}