arXiv: 2408.01890
Cross-layer Attention Sharing for Large Language Models
4 August 2024
Yongyu Mu, Yuzhang Wu, Yuchun Fan, Chenglong Wang, Hengyu Li, Qiaozhi He, Murun Yang, Tong Xiao, Jingbo Zhu
Papers citing "Cross-layer Attention Sharing for Large Language Models" (8 papers)
Cognitive Memory in Large Language Models
Lianlei Shan, Shixian Luo, Zezhou Zhu, Yu Yuan, Yong Wu
LLMAG, KELM · 69 · 1 · 0 · 03 Apr 2025
Tensor Product Attention Is All You Need
Yifan Zhang, Yifeng Liu, Huizhuo Yuan, Zhen Qin, Yang Yuan, Q. Gu, Andrew Chi-Chih Yao
72 · 8 · 0 · 11 Jan 2025
A Simple and Effective L₂ Norm-Based Strategy for KV Cache Compression
Alessio Devoto, Yu Zhao, Simone Scardapane, Pasquale Minervini
MQ · 35 · 23 · 0 · 17 Jun 2024
PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling
Zefan Cai, Yichi Zhang, Bofei Gao, Yuliang Liu, Tianyu Liu, ..., Wayne Xiong, Yue Dong, Baobao Chang, Junjie Hu, Wen Xiao
55 · 83 · 0 · 04 Jun 2024
CHAI: Clustered Head Attention for Efficient LLM Inference
Saurabh Agarwal, Bilge Acun, Basil Homer, Mostafa Elhoushi, Yejin Lee, Shivaram Venkataraman, Dimitris Papailiopoulos, Carole-Jean Wu
36 · 8 · 0 · 12 Mar 2024
I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
MQ · 86 · 332 · 0 · 05 Jan 2021
Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
MoE · 228 · 578 · 0 · 12 Mar 2020
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Z. Yao, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ · 217 · 571 · 0 · 12 Sep 2019