Evaluating Zero-Shot Long-Context LLM Compression

10 June 2024
Chenyu Wang
Yihan Wang
Kai Li
Abstract

This study evaluates the effectiveness of zero-shot compression techniques on large language models (LLMs) under long-context conditions. We identify a tendency for computational errors to increase under long context when certain compression methods are employed. We propose a hypothesis to explain the varied behavior of different LLM compression techniques and explore remedies to mitigate the performance decline observed in some techniques under long context. This is a course report for COS 598D: Machine Learning and Systems, taught by Prof. Kai Li at Princeton University. Due to limited computational resources, our experiments were conducted only on LLaMA-2-7B-32K.
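To make the kind of evaluation described above concrete, here is a minimal sketch, not the authors' protocol: it loads a long-context LLaMA-2-7B checkpoint with one example of zero-shot compression (8-bit weight quantization via bitsandbytes, which is only one of the technique families the study could compare) and measures perplexity over progressively longer prefixes to see whether errors grow with context length. The checkpoint id `togethercomputer/LLaMA-2-7B-32K` and the input file `long_document.txt` are assumptions for illustration.

```python
# Minimal sketch (not the paper's exact protocol): perplexity of a
# zero-shot-compressed LLM at increasing context lengths.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "togethercomputer/LLaMA-2-7B-32K"  # assumed long-context checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Zero-shot compression here is plain 8-bit weight quantization; the study
# considers several compression techniques, not just this one.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model.eval()

text = open("long_document.txt").read()  # any sufficiently long text (assumed file)
ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

# Evaluate perplexity over progressively longer prefixes to see whether
# errors grow as the context length increases.
for ctx_len in (2048, 8192, 16384, 32768):
    if ids.shape[1] < ctx_len:
        break
    chunk = ids[:, :ctx_len]
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # mean next-token NLL
    print(f"context={ctx_len:>6}  perplexity={torch.exp(loss).item():.2f}")
```

A rising perplexity curve for the compressed model relative to its full-precision counterpart would be one symptom of the long-context degradation the abstract refers to.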

@article{wang2025_2406.06773,
  title   = {Evaluating Zero-Shot Long-Context LLM Compression},
  author  = {Chenyu Wang and Yihan Wang and Kai Li},
  journal = {arXiv preprint arXiv:2406.06773},
  year    = {2025}
}