DynaMixer: A Vision MLP Architecture with Dynamic Mixing
arXiv:2201.12083 · 28 January 2022
Ziyu Wang, Wenhao Jiang, Yiming Zhu, Li Yuan, Yibing Song, Wei Liu

Papers citing "DynaMixer: A Vision MLP Architecture with Dynamic Mixing"

31 / 31 papers shown
Title
Deep Fourier-embedded Network for RGB and Thermal Salient Object Detection (27 Nov 2024)
Pengfei Lyu, Xiaosheng Yu, Chengdong Wu, Jagath C. Rajapakse
71 · 0 · 0

Improving 3D Medical Image Segmentation at Boundary Regions using Local Self-attention and Global Volume Mixing (20 Oct 2024)
Daniya Najiha Abdul Kareem, M. Fiaz, Noa Novershtern, Jacob Hanna, Hisham Cholakkal
31 · 3 · 0

MambaMixer: Efficient Selective State Space Models with Dual Token and Channel Selection (29 Mar 2024)
Ali Behrouz, Michele Santacatterina, Ramin Zabih
39 · 31 · 0

Heracles: A Hybrid SSM-Transformer Model for High-Resolution Image and Time-Series Analysis (26 Mar 2024)
Badri N. Patro, Suhas Ranganath, Vinay P. Namboodiri, Vijay Srinivas Agneeswaran
43 · 2 · 0

SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series (22 Mar 2024)
Badri N. Patro, Vijay Srinivas Agneeswaran
Tags: Mamba
51 · 50 · 0

Dimension Mixer: A Generalized Method for Structured Sparsity in Deep Neural Networks (30 Nov 2023)
Suman Sapkota, Binod Bhattarai
29 · 0 · 0

Scattering Vision Transformer: Spectral Mixing Matters (02 Nov 2023)
Badri N. Patro, Vijay Srinivas Agneeswaran
24 · 14 · 0

Strip-MLP: Efficient Token Interaction for Vision MLP (21 Jul 2023)
Guiping Cao, Shengda Luo, Wen-Fong Huang, X. Lan, D. Jiang, Yaowei Wang, Jianguo Zhang
28 · 10 · 0

Long-range Meta-path Search on Large-scale Heterogeneous Graphs (17 Jul 2023)
Chao Li, Zijie Guo, Qiuting He, Hao Xu, Kun He
16 · 2 · 0

Efficient Deep Spiking Multi-Layer Perceptrons with Multiplication-Free Inference (21 Jun 2023)
Boyan Li, Luziwei Leng, Shuaijie Shen, Kaixuan Zhang, Jianguo Zhang, Jianxing Liao, Ran Cheng
19 · 7 · 0

CAT-Walk: Inductive Hypergraph Learning via Set Walks (19 Jun 2023)
Ali Behrouz, Farnoosh Hashemi, Sadaf Sadeghian, Margo Seltzer
29 · 16 · 0

HyperConformer: Multi-head HyperMixer for Efficient Speech Recognition (29 May 2023)
Florian Mai, Juan Pablo Zuluaga, Titouan Parcollet, P. Motlícek
21 · 10 · 0

Caterpillar: A Pure-MLP Architecture with Shifted-Pillars-Concatenation (28 May 2023)
J. Sun, Xiaoshuang Shi, Zhiyuan Weng, Kaidi Xu, H. Shen, Xiao-lan Zhu
Tags: MLLM
22 · 2 · 0

TriMLP: Revenge of a MLP-like Architecture in Sequential Recommendation (24 May 2023)
Yiheng Jiang, Yuanbo Xu, Yongjian Yang, Funing Yang, Pengyang Wang, Hui Xiong
19 · 2 · 0

SpectFormer: Frequency and Attention is what you need in a Vision Transformer (13 Apr 2023)
Badri N. Patro, Vinay P. Namboodiri, Vijay Srinivas Agneeswaran
Tags: ViT
16 · 47 · 0

FFT-based Dynamic Token Mixer for Vision (07 Mar 2023)
Yuki Tatsunami, Masato Taki
43 · 18 · 0

Efficiency 360: Efficient Vision Transformers (16 Feb 2023)
Badri N. Patro, Vijay Srinivas Agneeswaran
19 · 6 · 0

A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity (12 Feb 2023)
Hongkang Li, M. Wang, Sijia Liu, Pin-Yu Chen
Tags: ViT, MLT
29 · 56 · 0

A Generalization of ViT/MLP-Mixer to Graphs (27 Dec 2022)
Xiaoxin He, Bryan Hooi, T. Laurent, Adam Perold, Yann LeCun, Xavier Bresson
22 · 88 · 0

Towards Efficient Adversarial Training on Vision Transformers (21 Jul 2022)
Boxi Wu, Jindong Gu, Zhifeng Li, Deng Cai, Xiaofei He, Wei Liu
Tags: ViT, AAML
23 · 37 · 0

Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives (02 Jun 2022)
Jun Li, Junyu Chen, Yucheng Tang, Ce Wang, Bennett A. Landman, S. K. Zhou
Tags: ViT, OOD, MedIm
16 · 19 · 0

HyperMixer: An MLP-based Low Cost Alternative to Transformers (07 Mar 2022)
Florian Mai, Arnaud Pannatier, Fabio Fehr, Haolin Chen, François Marelli, F. Fleuret, James Henderson
17 · 11 · 0

Not All Patches are What You Need: Expediting Vision Transformers via Token Reorganizations (16 Feb 2022)
Youwei Liang, Chongjian Ge, Zhan Tong, Yibing Song, Jue Wang, P. Xie
Tags: ViT
4 · 233 · 0

Are we ready for a new paradigm shift? A Survey on Visual Deep MLP (07 Nov 2021)
Ruiyang Liu, Yinghui Li, Li Tao, Dun Liang, Haitao Zheng
77 · 96 · 0

ConvMLP: Hierarchical Convolutional MLPs for Vision (09 Sep 2021)
Jiachen Li, Ali Hassani, Steven Walton, Humphrey Shi
33 · 55 · 0

MLP-Mixer: An all-MLP Architecture for Vision (04 May 2021)
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
239 · 2,592 · 0

Transformer in Transformer (27 Feb 2021)
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
Tags: ViT
282 · 1,518 · 0

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions (24 Feb 2021)
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
Tags: ViT
263 · 3,604 · 0

Bottleneck Transformers for Visual Recognition (27 Jan 2021)
A. Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani
Tags: SLR
270 · 973 · 0

Transformers in Vision: A Survey (04 Jan 2021)
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, F. Khan, M. Shah
Tags: ViT
225 · 2,427 · 0

Aggregated Residual Transformations for Deep Neural Networks (16 Nov 2016)
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
261 · 10,196 · 0