DCT-Former: Efficient Self-Attention with Discrete Cosine Transform

2 March 2022
Carmelo Scribano, Giorgia Franchini, M. Prato, Marko Bertogna

Papers citing "DCT-Former: Efficient Self-Attention with Discrete Cosine Transform"

4 / 4 papers shown
Learning Item Representations Directly from Multimodal Features for Effective Recommendation
Xin Zhou, Xiaoxiong Zhang, Dusit Niyato, Zhiqi Shen
08 May 2025
Combiner: Full Attention Transformer with Sparse Computation Cost
Hongyu Ren, H. Dai, Zihang Dai, Mengjiao Yang, J. Leskovec, Dale Schuurmans, Bo Dai
12 Jul 2021
MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
04 May 2021
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016