Understanding and Improving Information Preservation in Prompt Compression for LLMs

24 March 2025
Weronika Łajewska
Momchil Hardalov
Laura Aina
Neha Anna John
Hang Su
Lluís Màrquez
Abstract

Recent advancements in large language models (LLMs) have enabled their successful application to a broad range of tasks. However, in information-intensive tasks, the prompt length can grow quickly, leading to increased computational requirements, performance degradation, and biases induced by irrelevant or redundant information. Recently, various prompt compression techniques have been introduced to optimize the trade-off between reducing input length and retaining performance. We propose a holistic evaluation framework that allows for in-depth analysis of prompt compression methods. Besides compression ratio, we focus on three key aspects: (i) downstream task performance, (ii) grounding in the input context, and (iii) information preservation. Through this framework, we investigate state-of-the-art soft and hard compression methods, showing that they struggle to preserve key details from the original prompt, which limits their performance on complex tasks. We demonstrate that modifying soft prompting methods to better control the granularity of the compressed information can significantly improve their effectiveness: up to +23% in downstream task performance, more than +8 BERTScore points in grounding, and 2.7x more entities preserved in compression.
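To make the three evaluation aspects concrete, here is a minimal sketch of how such an evaluation loop could look. It is an illustration under stated assumptions, not the paper's implementation: it assumes a hard (text-to-text) compression method, approximates information preservation with spaCy NER entity overlap, measures grounding with the `bert-score` package, and stands in a simple exact-match check for the paper's task-specific downstream metrics. The helper names `evaluate` and `entity_preservation` are hypothetical.

```python
# Minimal sketch of the three evaluation aspects (plus compression ratio),
# assuming a hard (text-to-text) compression method. Helper names below
# are illustrative, not from the paper.
import spacy                  # pip install spacy; python -m spacy download en_core_web_sm
from bert_score import score  # pip install bert-score

nlp = spacy.load("en_core_web_sm")

def entity_preservation(original: str, compressed: str) -> float:
    """Fraction of named entities in the original prompt that survive compression."""
    orig_ents = {e.text.lower() for e in nlp(original).ents}
    comp_ents = {e.text.lower() for e in nlp(compressed).ents}
    return len(orig_ents & comp_ents) / max(len(orig_ents), 1)

def evaluate(original: str, compressed: str, answer: str, reference: str) -> dict:
    # (i) Downstream task performance: placeholder exact-match check;
    # the paper uses task-specific metrics.
    task_score = float(answer.strip().lower() == reference.strip().lower())
    # (ii) Grounding: BERTScore between the model's answer and the input context.
    _, _, f1 = score([answer], [original], lang="en")
    return {
        "compression_ratio": len(compressed) / len(original),
        "task_performance": task_score,
        "grounding_bertscore_f1": f1.item(),
        # (iii) Information preservation, approximated by entity overlap.
        "entity_preservation": entity_preservation(original, compressed),
    }
```

For soft compression methods, which produce embeddings rather than text, the entity-overlap check would have to operate on decoded or probed representations of the compressed prompt instead.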

BibTeX
@article{łajewska2025_2503.19114,
  title={Understanding and Improving Information Preservation in Prompt Compression for LLMs},
  author={Weronika Łajewska and Momchil Hardalov and Laura Aina and Neha Anna John and Hang Su and Lluís Màrquez},
  journal={arXiv preprint arXiv:2503.19114},
  year={2025}
}