ResearchTrend.AI

Relative Positional Encoding for Transformers with Linear Complexity (arXiv:2105.08399)

18 May 2021
Antoine Liutkus, Ondřej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard

Papers citing "Relative Positional Encoding for Transformers with Linear Complexity"

15 papers shown
  • Positional Encoding in Transformer-Based Time Series Models: A Survey
    Habib Irani, Vangelis Metsis (17 Feb 2025)
  • Efficient Attention via Control Variates
    Lin Zheng, Jianbo Yuan, Chong-Jun Wang, Lingpeng Kong (09 Feb 2023)
  • Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
    K. Choromanski, Shanda Li, Valerii Likhosherstov, Kumar Avinava Dubey, Shengjie Luo, Di He, Yiming Yang, Tamás Sarlós, Thomas Weingarten, Adrian Weller (03 Feb 2023)
  • Lightweight Structure-Aware Attention for Visual Understanding
    Heeseung Kwon, F. M. Castro, M. Marín-Jiménez, N. Guil, Alahari Karteek (29 Nov 2022)
  • Melody Infilling with User-Provided Structural Context
    Chih-Pin Tan, A. Su, Yi-Hsuan Yang (06 Oct 2022)
  • FastRPB: a Scalable Relative Positional Encoding for Long Sequence Tasks
    Maksim Zubkov, Daniil Gavrilov (23 Feb 2022)
  • cosFormer: Rethinking Softmax in Attention
    Zhen Qin, Weixuan Sun, Huicai Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, Yiran Zhong (17 Feb 2022)
  • Finding Strong Gravitational Lenses Through Self-Attention
    H. Thuruthipilly, A. Zadrożny, Agnieszka Pollo, Marek Biesiada (18 Oct 2021)
  • Ripple Attention for Visual Perception with Sub-quadratic Complexity
    Lin Zheng, Huijie Pan, Lingpeng Kong (06 Oct 2021)
  • MuseMorphose: Full-Song and Fine-Grained Piano Music Style Transfer with One Transformer VAE
    Shih-Lun Wu, Yi-Hsuan Yang (10 May 2021)
  • RoFormer: Enhanced Transformer with Rotary Position Embedding
    Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu (20 Apr 2021)
  • LambdaNetworks: Modeling Long-Range Interactions Without Attention
    Irwan Bello (17 Feb 2021)
  • Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs
    Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang (07 Jan 2021)
  • Efficient Content-Based Sparse Attention with Routing Transformers
    Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier (12 Mar 2020)
  • DDSP: Differentiable Digital Signal Processing
    Jesse Engel, Lamtharn Hantrakul, Chenjie Gu, Adam Roberts (14 Jan 2020)