END: Early Noise Dropping for Efficient and Effective Context Denoising

26 February 2025
Hongye Jin
Pei Chen
Jingfeng Yang
Zhengyang Wang
Meng Jiang
Yifan Gao
Binxuan Huang
Xinyang Zhang
Zheng Li
Tianyi Liu
Huasheng Li
Bing Yin
Abstract

Large Language Models (LLMs) have demonstrated remarkable performance across a wide range of natural language processing tasks. However, they are often distracted by irrelevant or noisy context in input sequences, which degrades output quality. This problem affects both long- and short-context scenarios, such as retrieval-augmented generation, table question answering, and in-context learning. We reveal that LLMs can implicitly identify whether input sequences contain useful information at early layers, prior to token generation. Leveraging this insight, we introduce Early Noise Dropping (END), a novel approach that mitigates this issue without requiring fine-tuning of the LLMs. END segments input sequences into chunks and employs a linear prober on the early layers of LLMs to differentiate between informative and noisy chunks. By discarding noisy chunks early in the process, END preserves critical information, reduces distraction, and lowers computational overhead. Extensive experiments demonstrate that END significantly improves both performance and efficiency across different LLMs on multiple evaluation datasets. Furthermore, by investigating LLMs' implicit understanding of the input with the prober, this work also deepens our understanding of how LLMs reason over context internally.

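The abstract describes the core mechanism: split the context into chunks, score each chunk with a linear probe on an early-layer hidden state, and drop chunks the probe flags as noise before generation. The following is a minimal sketch of that idea, assuming a HuggingFace causal LM; the model choice, probe layer, chunk size, threshold, and helper names (chunk_text, chunk_score, drop_noisy_chunks) are illustrative assumptions, not the authors' released implementation, and the linear probe would need to be trained as in the paper rather than left randomly initialized as it is here.

# Illustrative sketch of the END idea: chunk the context, probe an
# early-layer hidden state per chunk, and keep only chunks the probe
# scores as informative. All specific values below are assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"      # placeholder model; any causal LM with hidden states works
PROBE_LAYER = 4          # "early layer"; the actual layer used is an assumption
CHUNK_TOKENS = 128       # chunk size is an assumption

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# Hypothetical linear probe mapping a hidden state to one "informative" logit.
# In the paper the prober is trained; here we only show its shape and usage.
probe = torch.nn.Linear(model.config.hidden_size, 1)

def chunk_text(text: str, chunk_tokens: int = CHUNK_TOKENS) -> list[str]:
    """Split the context into fixed-size token chunks (one simple chunking choice)."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return [tokenizer.decode(ids[i:i + chunk_tokens])
            for i in range(0, len(ids), chunk_tokens)]

@torch.no_grad()
def chunk_score(chunk: str, query: str) -> float:
    """Score a chunk by probing the early-layer hidden state of its last token."""
    inputs = tokenizer(query + "\n" + chunk, return_tensors="pt")
    hidden = model(**inputs).hidden_states[PROBE_LAYER]  # (1, seq_len, hidden_size)
    return probe(hidden[0, -1]).item()                   # logit for "informative"

def drop_noisy_chunks(context: str, query: str, threshold: float = 0.0) -> str:
    """Keep only chunks whose probe score exceeds the threshold."""
    kept = [c for c in chunk_text(context) if chunk_score(c, query) > threshold]
    return "\n".join(kept)

Only the kept chunks are then passed to the LLM for generation, which is where the reduction in distraction and computational overhead described in the abstract comes from.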
View on arXiv
@article{jin2025_2502.18915,
  title={END: Early Noise Dropping for Efficient and Effective Context Denoising},
  author={Hongye Jin and Pei Chen and Jingfeng Yang and Zhengyang Wang and Meng Jiang and Yifan Gao and Binxuan Huang and Xinyang Zhang and Zheng Li and Tianyi Liu and Huasheng Li and Bing Yin},
  journal={arXiv preprint arXiv:2502.18915},
  year={2025}
}