Efficient Attention: Attention with Linear Complexities
arXiv: 1812.01243. 4 December 2018.
Zhuoran Shen, Mingyuan Zhang, Haiyu Zhao, Shuai Yi, Hongsheng Li
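The title refers to the factorization the paper proposes: rather than forming the n x n position map softmax(QK^T), efficient attention normalizes Q along the feature dimension and K along the position dimension, then multiplies K^T V first, reducing the cost from O(n^2 d) to O(n d^2). A minimal single-head NumPy sketch of that idea (shapes and names are illustrative, not taken from the authors' code):

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(Q, K, V):
    """Linear-complexity attention: softmax over features for Q, over
    positions for K, then compute the small d x d_v context K^T V before
    applying Q, so no n x n attention map is ever materialized."""
    q = softmax(Q, axis=1)   # (n, d): normalize each query over features
    k = softmax(K, axis=0)   # (n, d): normalize each key channel over positions
    context = k.T @ V        # (d, d_v): global context summary
    return q @ context       # (n, d_v)

rng = np.random.default_rng(0)
n, d = 8, 4
out = efficient_attention(rng.normal(size=(n, d)),
                          rng.normal(size=(n, d)),
                          rng.normal(size=(n, d)))
print(out.shape)  # (8, 4)
```

Note that for n much larger than d the k.T @ V product dominates, which is what makes the mechanism attractive for long sequences and high-resolution feature maps.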
Papers citing "Efficient Attention: Attention with Linear Complexities" (50 of 77 papers shown)
- Revisiting Reset Mechanisms in Spiking Neural Networks for Sequential Modeling: Specialized Discretization for Binary Activated RNN. Enqi Zhang. 24 Apr 2025. [MQ]
- Mamba-3D as Masked Autoencoders for Accurate and Data-Efficient Analysis of Medical Ultrasound Videos. Jiaheng Zhou, Yanfeng Zhou, Wei Fang, Yuxing Tang, Le Lu, Ge Yang. 26 Mar 2025. [Mamba]
- Enhancing Layer Attention Efficiency through Pruning Redundant Retrievals. Hanze Li, Xiande Huang. 09 Mar 2025.
- X-SG^2S: Safe and Generalizable Gaussian Splatting with X-dimensional Watermarks. Z. Cheng, Huiping Zhuang, Chun Li, Xin Meng, Ming Li, Fei Richard Yu, Liqiang Nie. 13 Feb 2025. [3DGS]
- Video Latent Flow Matching: Optimal Polynomial Projections for Video Interpolation and Extrapolation. Yang Cao, Zhao-quan Song, Chiwun Yang. 01 Feb 2025. [VGen]
- PolaFormer: Polarity-aware Linear Attention for Vision Transformers. Weikang Meng, Yadan Luo, Xin Li, D. Jiang, Zheng Zhang. 25 Jan 2025.
- ZETA: Leveraging Z-order Curves for Efficient Top-k Attention. Qiuhao Zeng, Jerry Huang, Peng Lu, Gezheng Xu, Boxing Chen, Charles X. Ling, Boyu Wang. 24 Jan 2025.
- Parallel Sequence Modeling via Generalized Spatial Propagation Network. Hongjun Wang, Wonmin Byeon, Jiarui Xu, Jinwei Gu, Ka Chun Cheung, Xiaolong Wang, Kai Han, Jan Kautz, Sifei Liu. 21 Jan 2025.
- Generative Retrieval for Book search. Yubao Tang, Ruqing Zhang, J. Guo, Maarten de Rijke, Shihao Liu, S. Wang, Dawei Yin, Xueqi Cheng. 19 Jan 2025. [RALM]
- Fast Gradient Computation for RoPE Attention in Almost Linear Time. Yifang Chen, Jiayan Huo, Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao-quan Song. 03 Jan 2025.
- Hadamard Attention Recurrent Transformer: A Strong Baseline for Stereo Matching Transformer. Ziyang Chen, Yongjun Zhang, Wenting Li, Bingshu Wang, Yabo Wu, Yong Zhao, C. L. P. Chen. 02 Jan 2025.
- IRFusionFormer: Enhancing Pavement Crack Segmentation with RGB-T Fusion and Topological-Based Loss. Ruiqiang Xiao, Xiaohu Chen. 31 Dec 2024.
- GrokFormer: Graph Fourier Kolmogorov-Arnold Transformers. Guoguo Ai, Guansong Pang, Hezhe Qiao, Yuan Gao, Hui Yan. 26 Nov 2024.
- Gotta Hear Them All: Sound Source Aware Vision to Audio Generation. Wei Guo, Heng Wang, Jianbo Ma, Weidong Cai. 23 Nov 2024. [DiffM]
- Breaking the Low-Rank Dilemma of Linear Attention. Qihang Fan, Huaibo Huang, Ran He. 12 Nov 2024.
- SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers. Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, ..., Zhekai Zhang, Muyang Li, Ligeng Zhu, Y. Lu, Song Han. 14 Oct 2024. [VLM]
- Selective Attention Improves Transformer. Yaniv Leviathan, Matan Kalman, Yossi Matias. 03 Oct 2024.
- TransDAE: Dual Attention Mechanism in a Hierarchical Transformer for Efficient Medical Image Segmentation. Bobby Azad, Pourya Adibfar, Kaiqun Fu. 03 Sep 2024. [ViT, MedIm]
- PADRe: A Unifying Polynomial Attention Drop-in Replacement for Efficient Vision Transformer. Pierre-David Létourneau, Manish Kumar Singh, Hsin-Pai Cheng, Shizhong Han, Yunxiao Shi, Dalton Jones, M. H. Langston, Hong Cai, Fatih Porikli. 16 Jul 2024.
- CrowdMoGen: Zero-Shot Text-Driven Collective Motion Generation. Yukang Cao, Xinying Guo, Mingyuan Zhang, Haozhe Xie, Chenyang Gu, Ziwei Liu. 08 Jul 2024.
- Accelerating Transformers with Spectrum-Preserving Token Merging. Hoai-Chau Tran, D. M. Nguyen, Duy M. Nguyen, Trung Thanh Nguyen, Ngan Le, Pengtao Xie, Daniel Sonntag, James Y. Zou, Binh T. Nguyen, Mathias Niepert. 25 May 2024.
- Dinomaly: The Less Is More Philosophy in Multi-Class Unsupervised Anomaly Detection. Jia Guo, Shuai Lu, Weihang Zhang, Huiqi Li, Hongen Liao. 23 May 2024. [ViT]
- Retrievable Domain-Sensitive Feature Memory for Multi-Domain Recommendation. Yuang Zhao, Zhaocheng Du, Qinglin Jia, Linxuan Zhang, Zhenhua Dong, Ruiming Tang. 21 May 2024.
- Asymptotic theory of in-context learning by linear attention. Yue M. Lu, Mary I. Letey, Jacob A. Zavatone-Veth, Anindita Maiti, C. Pehlevan. 20 May 2024.
- Folded Context Condensation in Path Integral Formalism for Infinite Context Transformers. Won-Gi Paeng, Daesuk Kwon, Kyungwon Jeong, Honggyo Suh. 07 May 2024.
- Enhancing Efficiency in Vision Transformer Networks: Design Techniques and Insights. Moein Heidari, Reza Azad, Sina Ghorbani Kolahi, René Arimond, Leon Niggemeier, ..., Afshin Bozorgpour, Ehsan Khodapanah Aghdam, A. Kazerouni, I. Hacihaliloglu, Dorit Merhof. 28 Mar 2024.
- Learning to See Through Dazzle. Xiaopeng Peng, Erin F. Fleet, A. Watnik, Grover A. Swartzlander. 24 Feb 2024. [GAN, AAML]
- Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators. Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter. 19 Feb 2024. [PINN, AI4CE]
- Graph Convolutions Enrich the Self-Attention in Transformers! Jeongwhan Choi, Hyowon Wi, Jayoung Kim, Yehjin Shin, Kookjin Lee, Nathaniel Trask, Noseong Park. 07 Dec 2023.
- Skin Lesion Segmentation Improved by Transformer-based Networks with Inter-scale Dependency Modeling. Sania Eskandari, Janet Lumpp, Luis Gonzalo Sánchez Giraldo. 20 Oct 2023. [ViT, MedIm]
- LightGrad: Lightweight Diffusion Probabilistic Model for Text-to-Speech. Jing Chen, Xingcheng Song, Zhendong Peng, Binbin Zhang, Fuping Pan, Zhiyong Wu. 31 Aug 2023. [DiffM]
- MLIC++: Linear Complexity Multi-Reference Entropy Modeling for Learned Image Compression. Wei Jiang, Jiayu Yang, Yongqi Zhai, Feng Gao, Ronggang Wang. 28 Jul 2023.
- Divert More Attention to Vision-Language Object Tracking. Mingzhe Guo, Zhipeng Zhang, Li Jing, Haibin Ling, Heng Fan. 19 Jul 2023. [VLM]
- Spike-driven Transformer. Man Yao, Jiakui Hu, Zhaokun Zhou, Liuliang Yuan, Yonghong Tian, Boxing Xu, Guoqi Li. 04 Jul 2023.
- TopicFM+: Boosting Accuracy and Efficiency of Topic-Assisted Feature Matching. Khang Truong Giang, Soohwan Song, Sung-Guk Jo. 02 Jul 2023.
- SP-BatikGAN: An Efficient Generative Adversarial Network for Symmetric Pattern Generation. Chrystian, Wahyono. 19 Apr 2023. [GAN]
- Efficient Joint Learning for Clinical Named Entity Recognition and Relation Extraction Using Fourier Networks: A Use Case in Adverse Drug Events. A. Yazdani, D. Proios, H. Rouhizadeh, Douglas Teodoro. 08 Feb 2023.
- LDMIC: Learning-based Distributed Multi-view Image Coding. Xinjie Zhang, Jiawei Shao, Jun Zhang. 24 Jan 2023.
- Lightweight Structure-Aware Attention for Visual Understanding. Heeseung Kwon, F. M. Castro, M. Marín-Jiménez, N. Guil, Alahari Karteek. 29 Nov 2022.
- Can denoising diffusion probabilistic models generate realistic astrophysical fields? N. Mudur, D. Finkbeiner. 22 Nov 2022. [DiffM]
- Breaking Free from Fusion Rule: A Fully Semantic-driven Infrared and Visible Image Fusion. Yuhui Wu, Zhu Liu, Jinyuan Liu, Xin-Yue Fan, Risheng Liu. 22 Nov 2022.
- BiViT: Extremely Compressed Binary Vision Transformer. Yefei He, Zhenyu Lou, Luoming Zhang, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang. 14 Nov 2022. [ViT, MQ]
- ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention. Jyotikrishna Dass, Shang Wu, Huihong Shi, Chaojian Li, Zhifan Ye, Zhongfeng Wang, Yingyan Lin. 09 Nov 2022.
- Token Merging: Your ViT But Faster. Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, Judy Hoffman. 17 Oct 2022. [MoMe]
- CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling. Jinchao Zhang, Shuyang Jiang, Jiangtao Feng, Lin Zheng, Lingpeng Kong. 14 Oct 2022. [3DV]
- Attention Enhanced Citrinet for Speech Recognition. Xianchao Wu. 01 Sep 2022.
- Deep Sparse Conformer for Speech Recognition. Xianchao Wu. 01 Sep 2022.
- Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization. T. Nguyen, Richard G. Baraniuk, Robert M. Kirby, Stanley J. Osher, Bao Wang. 01 Aug 2022.
- Pure Transformers are Powerful Graph Learners. Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong. 06 Jul 2022.
- Divert More Attention to Vision-Language Tracking. Mingzhe Guo, Zhipeng Zhang, Heng Fan, Li Jing. 03 Jul 2022.