RepQuant: Towards Accurate Post-Training Quantization of Large Transformer Models via Scale Reparameterization
arXiv:2402.05628 · 8 February 2024
Zhikai Li, Xuewen Liu, Jing Zhang, Qingyi Gu
Tags: MQ
Papers citing "RepQuant: Towards Accurate Post-Training Quantization of Large Transformer Models via Scale Reparameterization"
9 / 9 papers shown

1. SAQ-SAM: Semantically-Aligned Quantization for Segment Anything Model
   Jing Zhang, Z. Li, Qingyi Gu · Tags: MQ, VLM · 09 Mar 2025 · 45 / 0 / 0

2. CacheQuant: Comprehensively Accelerated Diffusion Models
   Xuewen Liu, Zhikai Li, Qingyi Gu · Tags: DiffM · 03 Mar 2025 · 25 / 0 / 0

3. Privacy-Preserving SAM Quantization for Efficient Edge Intelligence in Healthcare
   Zhikai Li, Jing Zhang, Qingyi Gu · Tags: MedIm · 14 Sep 2024 · 28 / 0 / 0

4. DopQ-ViT: Towards Distribution-Friendly and Outlier-Aware Post-Training Quantization for Vision Transformers
   Lianwei Yang, Haisong Gong, Qingyi Gu · Tags: MQ · 06 Aug 2024 · 21 / 2 / 0

5. ADFQ-ViT: Activation-Distribution-Friendly Post-Training Quantization for Vision Transformers
   Yanfeng Jiang, Ning Sun, Xueshuo Xie, Fei Yang, Tao Li · Tags: MQ · 03 Jul 2024 · 18 / 2 / 0

6. Model Quantization and Hardware Acceleration for Vision Transformers: A Comprehensive Survey
   Dayou Du, Gu Gong, Xiaowen Chu · Tags: MQ · 01 May 2024 · 26 / 5 / 0

7. Towards Accurate Post-Training Quantization for Vision Transformer
   Yifu Ding, Haotong Qin, Qing-Yu Yan, Z. Chai, Junjie Liu, Xiaolin K. Wei, Xianglong Liu · Tags: MQ · 25 Mar 2023 · 47 / 66 / 0

8. PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
   Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu · Tags: ViT, MQ · 13 Sep 2022 · 37 / 31 / 0

9. I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
   Zhikai Li, Qingyi Gu · Tags: MQ · 04 Jul 2022 · 41 / 94 / 0