This study evaluates the effectiveness of zero-shot compression techniques on large language models (LLMs) in long-context settings. We identify a tendency for computational errors to increase when certain compression methods are applied to long contexts. We propose a hypothesis to explain the varied behavior of different LLM compression techniques and explore remedies to mitigate the performance degradation that some techniques exhibit under long context. This is a course report for COS 598D: Machine Learning and Systems, taught by Prof. Kai Li at Princeton University. Due to limited computational resources, our experiments were conducted only on LLaMA-2-7B-32K.
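To make the evaluation setup concrete, the sketch below shows one way such a zero-shot (post-training, no fine-tuning) compression experiment might look: loading LLaMA-2-7B-32K with 8-bit weight quantization and measuring perplexity on a long input. This is a minimal illustration, not the report's actual harness; the checkpoint name, the choice of 8-bit quantization via bitsandbytes, the input file, and the 16K-token context length are all assumptions for the example.

```python
# Minimal sketch of a zero-shot long-context compression evaluation
# (illustrative only; the report's actual setup may differ).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "togethercomputer/LLaMA-2-7B-32K"  # assumed checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    # Zero-shot compression: quantize weights post-training, no fine-tuning.
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Placeholder long-context input; truncate to an assumed 16K tokens.
long_text = open("long_document.txt").read()
input_ids = tokenizer(long_text, return_tensors="pt").input_ids[:, :16384]
input_ids = input_ids.to(model.device)

# Perplexity over the long context: exp of the mean causal-LM loss.
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss
print(f"long-context perplexity: {torch.exp(loss).item():.2f}")
```

Comparing this perplexity against the uncompressed model's on the same input, at several context lengths, is one simple way to surface the long-context error growth described above.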
```bibtex
@article{wang2025_2406.06773,
  title   = {Evaluating Zero-Shot Long-Context LLM Compression},
  author  = {Chenyu Wang and Yihan Wang and Kai Li},
  journal = {arXiv preprint arXiv:2406.06773},
  year    = {2025}
}
```