A Systematic Study of Cross-Layer KV Sharing for Efficient LLM Inference

Recently, sharing the key-value (KV) cache across layers has been found effective for efficient inference of large language models (LLMs). To systematically investigate different techniques of cross-layer KV sharing, we propose a unified framework that covers several recent methods and their novel variants. We conduct comprehensive experiments on all the configurations of the framework, evaluating their generation throughput and their performance on language modeling and downstream tasks. We find that when the size of the KV cache is reduced by a factor of 2, most configurations achieve higher throughput than standard transformers while maintaining competitive performance. When the KV cache is reduced further, however, pairing the queries of all layers with the KVs of upper layers performs better, at the expense of additional training cost and prefilling latency. We hope that this work will help users make more informed choices among cross-layer KV sharing approaches and facilitate future research on efficient LLM inference.
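To illustrate the basic idea of cross-layer KV sharing, the sketch below shows a minimal PyTorch-style attention layer that either computes its own KV or reuses the KV cache produced by another layer. This is an illustrative assumption of the general technique, not the paper's framework or implementation; the class name SharedKVAttention, the single-head formulation, and the specific layer-to-KV mapping are all hypothetical.

```python
import torch
import torch.nn as nn

class SharedKVAttention(nn.Module):
    """Single-head attention that may reuse the KV of another layer.

    Minimal sketch for illustration only (no causal mask, no multi-head
    attention, no incremental decoding); not the paper's implementation.
    """
    def __init__(self, d_model: int, computes_kv: bool):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        self.computes_kv = computes_kv
        if computes_kv:
            # Only KV-producing layers keep K/V projections and a cache entry.
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, x, shared_kv=None):
        q = self.q_proj(x)
        if self.computes_kv:
            k, v = self.k_proj(x), self.v_proj(x)   # this layer's own KV
        else:
            k, v = shared_kv                         # reuse another layer's cached KV
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return self.out_proj(attn @ v), (k, v)


# Hypothetical 4-layer stack in which layers 1 and 3 reuse the KV of
# layers 0 and 2, so only half of the layers store a KV cache.
d_model, kv_source = 64, {0: 0, 1: 0, 2: 2, 3: 2}
layers = [SharedKVAttention(d_model, computes_kv=(kv_source[i] == i)) for i in range(4)]

x = torch.randn(1, 8, d_model)                       # (batch, seq_len, d_model)
kv_cache = {}
for i, layer in enumerate(layers):
    x, kv = layer(x, shared_kv=kv_cache.get(kv_source[i]))
    if kv_source[i] == i:
        kv_cache[i] = kv                             # cache only KV-producing layers
```

The mapping shown here pairs each layer with the KV of an earlier layer; the variant highlighted in the abstract instead pairs queries of all layers with the KVs of upper layers, which the authors note comes at the expense of additional training cost and prefilling latency.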
@article{wu2025_2410.14442,
  title   = {A Systematic Study of Cross-Layer KV Sharing for Efficient LLM Inference},
  author  = {You Wu and Haoyi Wu and Kewei Tu},
  journal = {arXiv preprint arXiv:2410.14442},
  year    = {2025}
}