EdgeInfinite: A Memory-Efficient Infinite-Context Transformer for Edge Devices

Abstract

Transformer-based large language models (LLMs) encounter challenges in processing long sequences on edge devices due to the quadratic complexity of attention mechanisms and the growing memory demands of the Key-Value (KV) cache. Existing KV cache optimizations struggle with irreversible token eviction in long-output tasks, while alternative sequence-modeling architectures are costly to adopt within established Transformer infrastructure. We present EdgeInfinite, a memory-efficient solution for infinite contexts that integrates compressed memory into Transformer-based LLMs through a trainable memory-gating module. The approach maintains full compatibility with standard Transformer architectures, requires fine-tuning only a small subset of parameters, and enables selective activation of the memory-gating module to route between long- and short-context tasks. Experimental results show that EdgeInfinite achieves performance comparable to baseline Transformer-based LLMs on long-context benchmarks while reducing memory consumption and time to first token.
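To make the idea concrete, the sketch below illustrates one plausible form of a trainable memory-gating module: a learned per-head gate blends the output of standard windowed attention with a readout from a compressed associative memory that summarizes earlier context. This is a minimal illustration inferred from the abstract; the class name, gate design, and memory update rule are assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryGatedAttention(nn.Module):
    """Hypothetical sketch: gate between local attention and compressed memory."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)
        # Trainable per-head gate: the only new parameters besides the blend,
        # keeping the fine-tuned portion of the model small (assumed design).
        self.gate = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x, memory, memory_norm):
        # x: (batch, seq, dim)
        # memory: (batch, heads, head_dim, head_dim) compressed summary of past context
        # memory_norm: (batch, heads, head_dim) normalization term for memory readout
        b, s, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)

        # Standard causal attention over the current window (unchanged Transformer path).
        local = F.scaled_dot_product_attention(q, k, v, is_causal=True)

        # Linear-attention-style readout from the compressed memory.
        sigma_q = F.elu(q) + 1.0
        mem_read = (sigma_q @ memory) / (
            (sigma_q * memory_norm.unsqueeze(2)).sum(-1, keepdim=True) + 1e-6
        )

        # Learned sigmoid gate blends memory readout with local attention per head.
        g = torch.sigmoid(self.gate).view(1, -1, 1, 1)
        blended = g * mem_read + (1.0 - g) * local

        # Fold the current window's keys/values into the compressed memory,
        # so the KV cache for this window can be discarded.
        sigma_k = F.elu(k) + 1.0
        new_memory = memory + sigma_k.transpose(-2, -1) @ v
        new_norm = memory_norm + sigma_k.sum(dim=2)

        return self.out(blended.transpose(1, 2).reshape(b, s, -1)), new_memory, new_norm
```

In this sketch, short-context requests could bypass the gate entirely and use the unchanged attention path, while long-context requests stream windows through the module, which keeps memory bounded by the fixed-size compressed state rather than a growing KV cache.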

@article{chen2025_2503.22196,
  title={EdgeInfinite: A Memory-Efficient Infinite-Context Transformer for Edge Devices},
  author={Jiyu Chen and Shuang Peng and Daxiong Luo and Fan Yang and Renshou Wu and Fangyuan Li and Xiaoxin Chen},
  journal={arXiv preprint arXiv:2503.22196},
  year={2025}
}