ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2402.05628 · Cited By
RepQuant: Towards Accurate Post-Training Quantization of Large Transformer Models via Scale Reparameterization
8 February 2024 · Zhikai Li, Xuewen Liu, Jing Zhang, Qingyi Gu · MQ

Papers citing "RepQuant: Towards Accurate Post-Training Quantization of Large Transformer Models via Scale Reparameterization" (9 of 9 papers shown)

  1. SAQ-SAM: Semantically-Aligned Quantization for Segment Anything Model
     Jing Zhang, Z. Li, Qingyi Gu · MQ, VLM · 09 Mar 2025
  2. CacheQuant: Comprehensively Accelerated Diffusion Models
     Xuewen Liu, Zhikai Li, Qingyi Gu · DiffM · 03 Mar 2025
  3. Privacy-Preserving SAM Quantization for Efficient Edge Intelligence in Healthcare
     Zhikai Li, Jing Zhang, Qingyi Gu · MedIm · 14 Sep 2024
  4. DopQ-ViT: Towards Distribution-Friendly and Outlier-Aware Post-Training Quantization for Vision Transformers
     Lianwei Yang, Haisong Gong, Qingyi Gu · MQ · 06 Aug 2024
  5. ADFQ-ViT: Activation-Distribution-Friendly Post-Training Quantization for Vision Transformers
     Yanfeng Jiang, Ning Sun, Xueshuo Xie, Fei Yang, Tao Li · MQ · 03 Jul 2024
  6. Model Quantization and Hardware Acceleration for Vision Transformers: A Comprehensive Survey
     Dayou Du, Gu Gong, Xiaowen Chu · MQ · 01 May 2024
  7. Towards Accurate Post-Training Quantization for Vision Transformer
     Yifu Ding, Haotong Qin, Qing-Yu Yan, Z. Chai, Junjie Liu, Xiaolin K. Wei, Xianglong Liu · MQ · 25 Mar 2023
  8. PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
     Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu · ViT, MQ · 13 Sep 2022
  9. I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
     Zhikai Li, Qingyi Gu · MQ · 04 Jul 2022