FNet: Mixing Tokens with Fourier Transforms
arXiv:2105.03824, 9 May 2021
James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon
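For orientation, the FNet architecture named in the title replaces the self-attention sublayer of a Transformer encoder with an unparameterized Fourier transform that mixes tokens: a 2D DFT is applied across the sequence and hidden dimensions and only the real part is kept. Below is a minimal sketch of that mixing step (an illustration, not the authors' released code; the surrounding layer norm and feed-forward blocks are omitted).

```python
# Minimal sketch of FNet-style Fourier token mixing (illustration only, not the
# authors' implementation): a 2D DFT over the sequence and hidden dimensions,
# keeping the real part, in place of self-attention.
import numpy as np

def fourier_mix(x: np.ndarray) -> np.ndarray:
    """Mix tokens of a (seq_len, d_model) array; output has the same shape."""
    return np.real(np.fft.fft2(x))  # FFT along both axes, then take the real part

# Toy usage: 8 tokens with 4 hidden features each.
tokens = np.random.randn(8, 4)
mixed = fourier_mix(tokens)
print(mixed.shape)  # (8, 4)
```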
Papers citing "FNet: Mixing Tokens with Fourier Transforms" (showing 50 of 251):
- "Aspect Based Sentiment Analysis Using Spectral Temporal Graph Neural Network" (14 Feb 2022), Abir Chakraborty, 5 citations
- "ETSformer: Exponential Smoothing Transformers for Time-series Forecasting" (03 Feb 2022), Gerald Woo, Chenghao Liu, Doyen Sahoo, Akshat Kumar, Guosheng Lin [AI4TS], 162 citations
- "FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting" (30 Jan 2022), Tian Zhou, Ziqing Ma, Qingsong Wen, Xue Wang, Liang Sun, Rong Jin [AI4TS], 1,307 citations
- "Convolutional Xformers for Vision" (25 Jan 2022), Pranav Jeevan, Amit Sethi [ViT], 12 citations
- "Video Transformers: A Survey" (16 Jan 2022), Javier Selva, A. S. Johansen, Sergio Escalera, Kamal Nasrollahi, T. Moeslund, Albert Clapés [ViT], 103 citations
- "ConvMixer: Feature Interactive Convolution with Curriculum Learning for Small Footprint and Noisy Far-field Keyword Spotting" (15 Jan 2022), Dianwen Ng, Yunqi Chen, Biao Tian, Qiang Fu, Chng Eng Siong, 46 citations
- "MAXIM: Multi-Axis MLP for Image Processing" (09 Jan 2022), Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, P. Milanfar, A. Bovik, Yinxiao Li, 463 citations
- "Block Walsh-Hadamard Transform Based Binary Layers in Deep Neural Networks" (07 Jan 2022), Hongyi Pan, Diaa Badawi, Ahmet Enis Cetin, 19 citations
- "A Novel Deep Parallel Time-series Relation Network for Fault Diagnosis" (03 Dec 2021), Chun Yang [AI4TS, AI4CE], 4 citations
- "Factorized Fourier Neural Operators" (27 Nov 2021), Alasdair Tran, A. Mathews, Lexing Xie, Cheng Soon Ong [AI4CE], 142 citations
- "Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers" (24 Nov 2021), John Guibas, Morteza Mardani, Zong-Yi Li, Andrew Tao, Anima Anandkumar, Bryan Catanzaro, 230 citations
- "SimpleTRON: Simple Transformer with O(N) Complexity" (23 Nov 2021), Uladzislau Yorsh, Alexander Kovalenko, Vojtěch Vančura, Daniel Vašata, Pavel Kordík, Tomáš Mikolov, 1 citation
- "MetaFormer Is Actually What You Need for Vision" (22 Nov 2021), Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, Shuicheng Yan, 874 citations
- "Spectral Transform Forms Scalable Transformer" (15 Nov 2021), Bingxin Zhou, Xinliang Liu, Yuehua Liu, Yunyin Huang, Pietro Lió, Yuguang Wang, 6 citations
- "The Pseudo Projection Operator: Applications of Deep Learning to Projection Based Filtering in Non-Trivial Frequency Regimes" (13 Nov 2021), Matthew L. Weiss, Nathan C. Frey, S. Samsi, Randy C. Paffenroth, V. Gadepally, 0 citations
- "Blending Anti-Aliasing into Vision Transformer" (28 Oct 2021), Shengju Qian, Hao Shao, Yi Zhu, Mu Li, Jiaya Jia, 20 citations
- "Periodic Activation Functions Induce Stationarity" (26 Oct 2021), Lassi Meronen, Martin Trapp, Arno Solin [BDL], 20 citations
- "The Efficiency Misnomer" (25 Oct 2021), Daoyuan Chen, Liuyi Yao, Dawei Gao, Ashish Vaswani, Yaliang Li, 99 citations
- "FDGATII: Fast Dynamic Graph Attention with Initial Residual and Identity Mapping" (21 Oct 2021), Gayan K. Kulatilleke, Marius Portmann, Ryan K. L. Ko, Shekhar S. Chandra, 9 citations
- "Inductive Biases and Variable Creation in Self-Attention Mechanisms" (19 Oct 2021), Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Cyril Zhang, 116 citations
- "An Empirical Study: Extensive Deep Temporal Point Process" (19 Oct 2021), Haitao Lin, Cheng Tan, Lirong Wu, Zhangyang Gao, Stan Z. Li [AI4TS], 12 citations
- "Use of Deterministic Transforms to Design Weight Matrices of a Neural Network" (06 Oct 2021), Pol Grau Jurado, Xinyue Liang, Alireza M. Javid, S. Chatterjee, 0 citations
- "Efficient and Private Federated Learning with Partially Trainable Networks" (06 Oct 2021), Hakim Sidahmed, Zheng Xu, Ankush Garg, Yuan Cao, Mingqing Chen [FedML], 13 citations
- "PoNet: Pooling Network for Efficient Token Mixing in Long Sequences" (06 Oct 2021), Chao-Hong Tan, Qian Chen, Wen Wang, Qinglin Zhang, Siqi Zheng, Zhenhua Ling [ViT], 11 citations
- "Redesigning the Transformer Architecture with Insights from Multi-particle Dynamical Systems" (30 Sep 2021), Subhabrata Dutta, Tanya Gautam, Soumen Chakrabarti, Tanmoy Chakraborty, 15 citations
- "Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers" (22 Sep 2021), Yi Tay, Mostafa Dehghani, J. Rao, W. Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler, 110 citations
- "Resolution-robust Large Mask Inpainting with Fourier Convolutions" (15 Sep 2021), Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, Victor Lempitsky, 822 citations
- "Oscillatory Fourier Neural Network: A Compact and Efficient Architecture for Sequential Processing" (14 Sep 2021), Bing Han, Cheng Wang, Kaushik Roy, 7 citations
- "Multitask Balanced and Recalibrated Network for Medical Code Prediction" (06 Sep 2021), Wei Sun, Shaoxiong Ji, Min Zhang, Pekka Marttinen, 15 citations
- "SANSformers: Self-Supervised Forecasting in Electronic Health Records with Attention-Free Models" (31 Aug 2021), Yogesh Kumar, Alexander Ilin, H. Salo, S. Kulathinal, M. Leinonen, Pekka Marttinen [AI4TS, MedIm], 0 citations
- "Shatter: An Efficient Transformer Encoder with Single-Headed Self-Attention and Relative Sequence Partitioning" (30 Aug 2021), Ran Tian, Joshua Maynez, Ankur P. Parikh [ViT], 2 citations
- "FMMformer: Efficient and Flexible Transformer via Decomposed Near-field and Far-field Attention" (05 Aug 2021), T. Nguyen, Vai Suliafu, Stanley J. Osher, Long Chen, Bao Wang, 35 citations
- "Large-Scale Differentially Private BERT" (03 Aug 2021), Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, Pasin Manurangsi, 131 citations
- "On Integral Theorems and their Statistical Properties" (22 Jul 2021), Nhat Ho, S. Walker, 0 citations
- "FNetAR: Mixing Tokens with Autoregressive Fourier Transforms" (22 Jul 2021), Tim Lou, M. Park, M. Ramezanali, Vincent Tang [AI4TS], 2 citations
- "Vision Xformers: Efficient Attention for Image Classification" (05 Jul 2021), Pranav Jeevan, Amit Sethi [ViT], 13 citations
- "Global Filter Networks for Image Classification" (01 Jul 2021), Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, Jie Zhou [ViT], 451 citations
- "Multi-Exit Vision Transformer for Dynamic Inference" (29 Jun 2021), Arian Bakhtiarnia, Qi Zhang, Alexandros Iosifidis, 26 citations
- "XCiT: Cross-Covariance Image Transformers" (17 Jun 2021), Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, ..., Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou [ViT], 499 citations
- "PairConnect: A Compute-Efficient MLP Alternative to Attention" (15 Jun 2021), Zhaozhuo Xu, Minghao Yan, Junyan Zhang, Anshumali Shrivastava, 1 citation
- "WAX-ML: A Python library for machine learning and feedback loops on streaming data" (11 Jun 2021), Emmanuel Sérié [KELM, AI4CE], 0 citations
- "GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures" (10 Jun 2021), Ivan Chelombiev, Daniel Justus, Douglas Orr, A. Dietrich, Frithjof Gressmann, A. Koliousis, Carlo Luschi, 5 citations
- "Choose a Transformer: Fourier or Galerkin" (31 May 2021), Shuhao Cao, 225 citations
- "Dispatcher: A Message-Passing Approach To Language Modelling" (09 May 2021), A. Cetoli, 0 citations
- "MLP-Mixer: An all-MLP Architecture for Vision" (04 May 2021), Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy, 2,606 citations
- "Fourier Image Transformer" (06 Apr 2021), T. Buchholz, Florian Jug [ViT], 17 citations
- "Fourier Neural Operator for Parametric Partial Differential Equations" (18 Oct 2020), Zong-Yi Li, Nikola B. Kovachki, Kamyar Azizzadenesheli, Burigede Liu, K. Bhattacharya, Andrew M. Stuart, Anima Anandkumar [AI4CE], 2,298 citations
- "Efficient Transformers: A Survey" (14 Sep 2020), Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler [VLM], 1,102 citations
- "Big Bird: Transformers for Longer Sequences" (28 Jul 2020), Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed [VLM], 2,017 citations
- "Efficient Content-Based Sparse Attention with Routing Transformers" (12 Mar 2020), Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier [MoE], 580 citations