MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation

AAAI Conference on Artificial Intelligence (AAAI), 2022
21 December 2021
Zhongzhi Yu, Yonggan Fu, Sicheng Li, Chaojian Li, Yingyan Lin
Topics: ViT

Papers citing "MIA-Former: Efficient and Robust Vision Transformers via Multi-grained Input-Adaptation"

Showing 10 of 10 citing papers.
ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer
Neural Information Processing Systems (NeurIPS), 2023
Haoran You, Huihong Shi, Yipin Guo, Yingyan Lin
10 Jun 2023

Hint-Aug: Drawing Hints from Foundation Vision Transformers Towards Boosted Few-Shot Parameter-Efficient Tuning
Computer Vision and Pattern Recognition (CVPR), 2023
Zhongzhi Yu, Shang Wu, Yonggan Fu, Shunyao Zhang, Yingyan Lin
25 Apr 2023

AdaMTL: Adaptive Input-dependent Inference for Efficient Multi-Task Learning
Marina Neseem, Ahmed A. Agiza, Sherief Reda
17 Apr 2023

Map-and-Conquer: Energy-Efficient Mapping of Dynamic Neural Nets onto Heterogeneous MPSoCs
Design Automation Conference (DAC), 2023
Halima Bouzidi, Mohanad Odema, Hamza Ouarnoughi, Smail Niar, Mohammad Abdullah Al Faruque
24 Feb 2023

Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference
Computer Vision and Pattern Recognition (CVPR), 2023
Haoran You, Yunyang Xiong, Xiaoliang Dai, Bichen Wu, Peizhao Zhang, Haoqi Fan, Peter Vajda, Yingyan Lin
18 Nov 2022

Towards Efficient Adversarial Training on Vision Transformers
European Conference on Computer Vision (ECCV), 2022
Boxi Wu, Jindong Gu, Zhifeng Li, Deng Cai, Xiaofei He, Wei Liu
Topics: ViT, AAML
21 Jul 2022

Are Vision Transformers Robust to Patch Perturbations?
European Conference on Computer Vision (ECCV), 2022
Jindong Gu, Volker Tresp, Yao Qin
Topics: AAML, ViT
20 Nov 2021

Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
Yonggan Fu, Qixuan Yu, Yang Zhang, Shan-Hung Wu, Ouyang Xu, David D. Cox, Yingyan Lin
Topics: AAML, OOD
26 Oct 2021

SDTP: Semantic-aware Decoupled Transformer Pyramid for Dense Image Prediction
Zekun Li, Yufan Liu, Bing Li, Weiming Hu, Kebin Wu, Chengwei Peng
Topics: ViT
18 Sep 2021

2-in-1 Accelerator: Enabling Random Precision Switch for Winning Both Adversarial Robustness and Efficiency
Yonggan Fu, Yang Zhao, Qixuan Yu, Chaojian Li, Yingyan Lin
Topics: AAML
11 Sep 2021