DCT-Former: Efficient Self-Attention with Discrete Cosine Transform
Carmelo Scribano, Giorgia Franchini, M. Prato, Marko Bertogna
2 March 2022 · arXiv:2203.01178
Papers citing "DCT-Former: Efficient Self-Attention with Discrete Cosine Transform" (4 papers)
Learning Item Representations Directly from Multimodal Features for Effective Recommendation
Xin Zhou, Xiaoxiong Zhang, Dusit Niyato, Zhiqi Shen
08 May 2025

Combiner: Full Attention Transformer with Sparse Computation Cost
Hongyu Ren, H. Dai, Zihang Dai, Mengjiao Yang, J. Leskovec, Dale Schuurmans, Bo Dai
12 Jul 2021

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
04 May 2021

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016