Focus on the Core: Efficient Attention via Pruned Token Compression for Document Classification
Jungmin Yun, Mihyeon Kim, Youngbin Kim
arXiv:2406.01283, 3 June 2024
Papers citing "Focus on the Core: Efficient Attention via Pruned Token Compression for Document Classification" (7 of 7 papers shown):
Saliency-driven Dynamic Token Pruning for Large Language Models. Yao Tao, Yehui Tang, Yun Wang, Mingjian Zhu, Hailin Hu, Yunhe Wang. 06 Apr 2025.
Selective Attention Improves Transformer. Yaniv Leviathan, Matan Kalman, Yossi Matias. 03 Oct 2024.
Accelerating Transformers with Spectrum-Preserving Token Merging. Hoai-Chau Tran, D. M. Nguyen, Duy M. Nguyen, Trung Thanh Nguyen, Ngan Le, Pengtao Xie, Daniel Sonntag, James Y. Zou, Binh T. Nguyen, Mathias Niepert. 25 May 2024.
LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. Ilias Chalkidis, Abhik Jana, D. Hartung, M. Bommarito, Ion Androutsopoulos, Daniel Martin Katz, Nikolaos Aletras. 03 Oct 2021.
ERNIE-Doc: A Retrospective Long-Document Modeling Transformer. Siyu Ding, Junyuan Shang, Shuohuan Wang, Yu Sun, Hao Tian, Hua-Hong Wu, Haifeng Wang. 31 Dec 2020.
Big Bird: Transformers for Longer Sequences. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed. 28 Jul 2020.
Categorical Reparameterization with Gumbel-Softmax. Eric Jang, S. Gu, Ben Poole. 03 Nov 2016.