PADRe: A Unifying Polynomial Attention Drop-in Replacement for Efficient Vision Transformer
arXiv:2407.11306 · 16 July 2024
Pierre-David Létourneau, Manish Kumar Singh, Hsin-Pai Cheng, Shizhong Han, Yunxiao Shi, Dalton Jones, M. H. Langston, Hong Cai, Fatih Porikli

Papers citing "PADRe: A Unifying Polynomial Attention Drop-in Replacement for Efficient Vision Transformer" (4 papers)

Run, Don't Walk: Chasing Higher FLOPS for Faster Neural Networks
Jierun Chen, Shiu-hong Kao, Hao He, Weipeng Zhuo, Song Wen, Chul-Ho Lee, Shueng-Han Gary Chan
OOD · 07 Mar 2023

Hydra Attention: Efficient Attention with Many Heads
Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
15 Sep 2022

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
04 May 2021

ImageNet Large Scale Visual Recognition Challenge
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
VLM, ObjD · 01 Sep 2014