
ILRe: Intermediate Layer Retrieval for Context Compression in Causal Language Models

Main: 11 pages · Appendix: 3 pages · Bibliography: 2 pages · 6 figures · 6 tables
Abstract

Large Language Models (LLMs) have demonstrated success across many benchmarks. However, they still exhibit limitations in long-context scenarios, primarily due to their short effective context length, quadratic computational complexity, and high memory overhead when processing lengthy inputs. To mitigate these issues, we introduce a novel context compression pipeline, called Intermediate Layer Retrieval (ILRe), which determines one intermediate decoder layer offline, encodes the context by streaming chunked prefill only up to that layer, and recalls tokens by the attention scores between the input query and the full key cache in that layer. In particular, we propose a multi-pooling-kernel allocation strategy in the token-recall process to preserve semantic completeness. Our approach not only reduces the prefilling complexity from O(L²) to O(L) and trims the memory footprint to a few tenths of that required for the full context, but also delivers performance comparable or superior to the full-context setup in long-context scenarios. Without additional post-training or operator development, ILRe can process a single 1M-token request in less than half a minute (a speedup of ≈180×) and scores ≈79.8 on the RULER-1M benchmark with the model Llama-3.1-UltraLong-8B-1M-Instruct on a Huawei Ascend 910B NPU.
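The core recall step described above can be sketched in a few lines of NumPy: score every cached key at the chosen intermediate layer against the query tokens, smooth the scores with several pooling kernels so that neighbours of high-scoring tokens are also kept, and retain the top-budget positions. This is a minimal illustration, not the authors' implementation; the function name, kernel sizes, and the max-over-kernels combination are assumptions for demonstration.

```python
import numpy as np

def recall_tokens(q, k_cache, budget, kernel_sizes=(1, 7, 15)):
    """Sketch of ILRe-style token recall at one intermediate layer.

    q        : (n_q, d)  query-token states at the chosen layer (assumed shape)
    k_cache  : (n_ctx, d) full key cache of the context at that layer
    budget   : number of context token positions to keep
    kernel_sizes : illustrative odd pooling-window sizes (an assumption)
    """
    d = q.shape[-1]
    # Attention-style scores between every query token and every cached key.
    scores = (q @ k_cache.T) / np.sqrt(d)        # (n_q, n_ctx)
    scores = scores.max(axis=0)                  # collapse over query tokens

    # Multi-kernel max pooling: a token's score is lifted if a nearby token
    # scores highly, keeping contiguous spans (semantic completeness).
    pooled = np.full_like(scores, -np.inf)
    for ks in kernel_sizes:
        pad = ks // 2
        padded = np.pad(scores, pad, constant_values=-np.inf)
        windows = np.lib.stride_tricks.sliding_window_view(padded, ks)
        pooled = np.maximum(pooled, windows.max(axis=-1))

    # Keep the top-`budget` positions, restored to original document order.
    return np.sort(np.argsort(pooled)[-budget:])
```

Only the tokens at the returned positions (and their caches) need to be carried into the remaining decoder layers, which is where the memory and prefill savings come from.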
