Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques

Large Language Models (LLMs) have revolutionized many areas of artificial intelligence (AI), but their substantial resource requirements limit their deployment on mobile and edge devices. This survey paper provides a comprehensive overview of techniques for compressing LLMs to enable efficient inference in resource-constrained environments. We examine three primary approaches: Knowledge Distillation, Model Quantization, and Model Pruning. For each technique, we discuss the underlying principles, present different variants, and provide examples of successful applications. We also briefly discuss complementary techniques such as mixture-of-experts and early-exit strategies. Finally, we highlight promising future directions, aiming to provide a valuable resource for both researchers and practitioners seeking to optimize LLMs for edge deployment.
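To give a concrete flavor of one class of techniques the survey covers, the sketch below illustrates symmetric per-tensor int8 post-training weight quantization in NumPy. It is an illustrative example only, not drawn from the paper; the helper names quantize_int8 and dequantize are hypothetical.

    import numpy as np

    def quantize_int8(weights: np.ndarray):
        """Symmetric per-tensor int8 quantization: returns int8 weights and a scale."""
        scale = np.max(np.abs(weights)) / 127.0  # map the largest magnitude to 127
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover an approximate float32 tensor for computation."""
        return q.astype(np.float32) * scale

    # Toy example: a small weight matrix loses little precision under int8 quantization.
    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    print(np.max(np.abs(w - dequantize(q, s))))  # worst-case rounding error is roughly s/2

Storing weights as int8 plus a single float scale cuts memory roughly 4x versus float32, which is the basic trade-off quantization methods exploit.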
@article{girija2025_2505.02309,
  title={Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques},
  author={Sanjay Surendranath Girija and Shashank Kapoor and Lakshit Arora and Dipen Pradhan and Aman Raj and Ankit Shetgaonkar},
  journal={arXiv preprint arXiv:2505.02309},
  year={2025}
}