© 2025 ResearchTrend.AI, All rights reserved.

Long Range Arena: A Benchmark for Efficient Transformers
arXiv:2011.04006 · 8 November 2020
Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, J. Rao, Liu Yang, Sebastian Ruder, Donald Metzler

Papers citing "Long Range Arena: A Benchmark for Efficient Transformers"

Showing 50 of 134 citing papers.

Efficient Unstructured Pruning of Mamba State-Space Models for Resource-Constrained Environments
  Ibne Farabi Shihab, Sanjeda Akter, Anuj Sharma · [Mamba] · 13 May 2025

BLAB: Brutally Long Audio Bench
  Orevaoghene Ahia, Martijn Bartelds, Kabir Ahuja, Hila Gonen, Valentin Hofmann, ..., Noah Bennett, Shinji Watanabe, Noah A. Smith, Yulia Tsvetkov, Sachin Kumar · [AuLLM, LM&MA, VLM] · 05 May 2025

Rethinking Memory in AI: Taxonomy, Operations, Topics, and Future Directions
  Yiming Du, Wenyu Huang, Danna Zheng, Zhaowei Wang, Sébastien Montella, Mirella Lapata, Kam-Fai Wong, Jeff Z. Pan · [KELM, MU] · 01 May 2025

From S4 to Mamba: A Comprehensive Survey on Structured State Space Models
  Shriyank Somvanshi, Md Monzurul Islam, Mahmuda Sultana Mimi, Sazzad Bin Bashar Polock, Gaurab Chhetri, Subasish Das · [Mamba, AI4TS] · 22 Mar 2025

A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
  Thomas Schmied, Thomas Adler, Vihang Patil, M. Beck, Korbinian Poppel, Johannes Brandstetter, G. Klambauer, Razvan Pascanu, Sepp Hochreiter · 21 Feb 2025

On the Promise for Assurance of Differentiable Neurosymbolic Reasoning Paradigms
  Luke E. Richards, Jessie Yaros, Jasen Babcock, Coung Ly, Robin Cosbey, Timothy Doster, Cynthia Matuszek · [NAI] · 13 Feb 2025

HadamRNN: Binary and Sparse Ternary Orthogonal RNNs
  Armand Foucault, Franck Mamalet, François Malgouyres · [MQ] · 28 Jan 2025

PolaFormer: Polarity-aware Linear Attention for Vision Transformers
  Weikang Meng, Yadan Luo, Xin Li, D. Jiang, Zheng Zhang · 25 Jan 2025

ZETA: Leveraging Z-order Curves for Efficient Top-k Attention
  Qiuhao Zeng, Jerry Huang, Peng Lu, Gezheng Xu, Boxing Chen, Charles X. Ling, Boyu Wang · 24 Jan 2025

ELITR-Bench: A Meeting Assistant Benchmark for Long-Context Language Models
  Thibaut Thonet, Jos Rozen, Laurent Besacier · [RALM] · 20 Jan 2025

Layer-Adaptive State Pruning for Deep State Space Models
  Minseon Gwak, Seongrok Moon, Joohwan Ko, PooGyeon Park · 05 Nov 2024

Lambda-Skip Connections: the architectural component that prevents Rank Collapse
  Federico Arangath Joseph, Jerome Sieber, M. Zeilinger, Carmen Amo Alonso · 14 Oct 2024

Drama: Mamba-Enabled Model-Based Reinforcement Learning Is Sample and Parameter Efficient
  Wenlong Wang, Ivana Dusparic, Yucheng Shi, Ke Zhang, V. Cahill · [Mamba] · 11 Oct 2024

LongGenBench: Long-context Generation Benchmark
  Xiang Liu, Peijie Dong, Xuming Hu, Xiaowen Chu · [RALM] · 05 Oct 2024

S7: Selective and Simplified State Space Layers for Sequence Modeling
  Taylan Soydan, Nikola Zubić, Nico Messikommer, Siddhartha Mishra, Davide Scaramuzza · 04 Oct 2024

Dolphin: A Programmable Framework for Scalable Neurosymbolic Learning
  Aaditya Naik, Jason Liu, Claire Wang, Saikat Dutta, Mayur Naik, Eric Wong · 04 Oct 2024

Mitigating Memorization In Language Models
  Mansi Sakarvadia, Aswathy Ajith, Arham Khan, Nathaniel Hudson, Caleb Geniesse, Kyle Chard, Yaoqing Yang, Ian Foster, Michael W. Mahoney · [KELM, MU] · 03 Oct 2024

HELMET: How to Evaluate Long-Context Language Models Effectively and Thoroughly
  Howard Yen, Tianyu Gao, Minmin Hou, Ke Ding, Daniel Fleischer, Peter Izsak, Moshe Wasserblat, Danqi Chen · [ALM, ELM] · 03 Oct 2024

Tuning Frequency Bias of State Space Models
  Annan Yu, Dongwei Lyu, S. H. Lim, Michael W. Mahoney, N. Benjamin Erichson · 02 Oct 2024

Flash STU: Fast Spectral Transform Units
  Y. Isabel Liu, Windsor Nguyen, Yagiz Devre, Evan Dogariu, Anirudha Majumdar, Elad Hazan · [AI4TS] · 16 Sep 2024

Sampling Foundational Transformer: A Theoretical Perspective
  Viet Anh Nguyen, Minh Lenhat, Khoa Nguyen, Duong Duc Hieu, Dao Huu Hung, Truong Son-Hy · 11 Aug 2024

AnnotatedTables: A Large Tabular Dataset with Language Model Annotations
  Yaojie Hu, Ilias Fountalis, Jin Tian, N. Vasiloglou · [LMTD] · 24 Jun 2024

BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack
  Yuri Kuratov, Aydar Bulatov, Petr Anokhin, Ivan Rodkin, Dmitry Sorokin, Artyom Sorokin, Mikhail Burtsev · [RALM, ALM, LRM, ReLM, ELM] · 14 Jun 2024

SWAT: Scalable and Efficient Window Attention-based Transformers Acceleration on FPGAs
  Zhenyu Bai, Pranav Dangi, Huize Li, Tulika Mitra · 27 May 2024

Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks
  Jerome Sieber, Carmen Amo Alonso, A. Didier, M. Zeilinger, Antonio Orvieto · [AAML] · 24 May 2024

Whole Genome Transformer for Gene Interaction Effects in Microbiome Habitat Specificity
  Zhufeng Li, S. S. Cranganore, Nicholas D. Youngblut, Niki Kilbertus · 09 May 2024

Optimizing Universal Lesion Segmentation: State Space Model-Guided Hierarchical Networks with Feature Importance Adjustment
  Kazi Shahriar Sanjid, Md. Tanzim Hossain, Md. Shakib Shahariar Junayed, M. M. Uddin · [Mamba] · 26 Apr 2024

GenEARL: A Training-Free Generative Framework for Multimodal Event Argument Role Labeling
  Hritik Bansal, Po-Nien Kung, P. Brantingham, Weisheng Wang, Miao Zheng · [VLM] · 07 Apr 2024

NovelQA: Benchmarking Question Answering on Documents Exceeding 200K Tokens
  Cunxiang Wang, Ruoxi Ning, Boqi Pan, Tonghui Wu, Qipeng Guo, ..., Guangsheng Bao, Xiangkun Hu, Zheng Zhang, Qian Wang, Yue Zhang · [RALM] · 18 Mar 2024

Assessing the importance of long-range correlations for deep-learning-based sleep staging
  Tiezhi Wang, Nils Strodthoff · 22 Feb 2024

Investigating Recurrent Transformers with Dynamic Halt
  Jishnu Ray Chowdhury, Cornelia Caragea · 01 Feb 2024

Input Convex Lipschitz RNN: A Fast and Robust Approach for Engineering Tasks
  Zihao Wang, Zhen Wu · 15 Jan 2024

MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition
  Nicolas Menet, Michael Hersche, G. Karunaratne, Luca Benini, Abu Sebastian, Abbas Rahimi · 05 Dec 2023

Continuous Convolutional Neural Networks for Disruption Prediction in Nuclear Fusion Plasmas
  William Arnold, Lucas Spangher, Christina Rea · [MU] · 03 Dec 2023

StableSSM: Alleviating the Curse of Memory in State-space Models through Stable Reparameterization
  Shida Wang, Qianxiao Li · 24 Nov 2023

netFound: Foundation Model for Network Security
  Satyandra Guthula, Navya Battula, Roman Beltiukov, Wenbo Guo, Arpit Gupta, Inder Monga · 25 Oct 2023

Multilingual estimation of political-party positioning: From label aggregation to long-input Transformers
  Dmitry Nikolaev, Tanise Ceron, Sebastian Padó · 19 Oct 2023

Learning to Rank Context for Named Entity Recognition Using a Synthetic Dataset
  Arthur Amalvy, Vincent Labatut, Richard Dufour · 16 Oct 2023

Do We Run How We Say We Run? Formalization and Practice of Governance in OSS Communities
  Mahasweta Chakraborti, Curtis Atkisson, Stefan Stanciulescu, V. Filkov, Seth Frey · 25 Sep 2023

How to Protect Copyright Data in Optimization of Large Language Models?
  T. Chu, Zhao-quan Song, Chiwun Yang · 23 Aug 2023

Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers
  Tobias Christian Nauen, Sebastián M. Palacio, Federico Raue, Andreas Dengel · 18 Aug 2023

Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
  V. Usatyuk, Sergey Egorov, Denis Sapozhnikov · 28 Jul 2023

Fading memory as inductive bias in residual recurrent networks
  I. Dubinin, Felix Effenberger · 27 Jul 2023

LongNet: Scaling Transformers to 1,000,000,000 Tokens
  Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei · [CLL] · 05 Jul 2023

MNISQ: A Large-Scale Quantum Circuit Dataset for Machine Learning on/for Quantum Computers in the NISQ era
  Leonardo Placidi, Ryuichiro Hataya, Toshio Mori, Koki Aoyama, Hayata Morisaki, K. Mitarai, Keisuke Fujii · 29 Jun 2023

When to Use Efficient Self Attention? Profiling Text, Speech and Image Transformer Variants
  Anuj Diwan, Eunsol Choi, David F. Harwath · 14 Jun 2023

Focus Your Attention (with Adaptive IIR Filters)
  Shahar Lutati, Itamar Zimerman, Lior Wolf · 24 May 2023

RWKV: Reinventing RNNs for the Transformer Era
  Bo Peng, Eric Alcaide, Quentin G. Anthony, Alon Albalak, Samuel Arcadinho, ..., Qihang Zhao, P. Zhou, Qinghua Zhou, Jian Zhu, Rui-Jie Zhu · 22 May 2023

CageViT: Convolutional Activation Guided Efficient Vision Transformer
  Hao Zheng, Jinbao Wang, Xiantong Zhen, H. Chen, Jingkuan Song, Feng Zheng · [ViT] · 17 May 2023

Ray-Patch: An Efficient Querying for Light Field Transformers
  T. B. Martins, Javier Civera · [ViT] · 16 May 2023