Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer
arXiv:2405.03882 · 6 May 2024
Huihong Shi, Haikuo Shao, Wendong Mao, Zhongfeng Wang
ViT · MQ
Papers citing "Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer" (5 of 5 papers shown)
M²-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization
Yanbiao Liang, Huihong Shi, Zhongfeng Wang
MQ · 10 Oct 2024
I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
Zhikai Li, Qingyi Gu
MQ · 04 Jul 2022
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
ViT · 05 Oct 2021
I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
MQ · 05 Jan 2021
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Z. Yao, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ · 12 Sep 2019