QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization

International Conference on Learning Representations (ICLR), 2022
11 March 2022
Xiuying Wei, Yazhe Niu, Yuhang Li, Xianglong Liu, F. Yu
MQ, VLM
ArXiv (abs) · PDF · HTML · GitHub (122★)

Papers citing "QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization"

24 / 124 papers shown

Augmenting Hessians with Inter-Layer Dependencies for Mixed-Precision Post-Training Quantization
Clemens J. S. Schaefer, Navid Lambert-Shirzad, Xiaofan Zhang, Chia-Wei Chou, T. Jablin, Jian Li, Elfie Guo, Caitlin Stanton, S. Joshi, Yu Emma Wang
MQ
08 Jun 2023
Temporal Dynamic Quantization for Diffusion Models
Neural Information Processing Systems (NeurIPS), 2023
Junhyuk So, Jungwon Lee, Daehyun Ahn, Hyungjun Kim, Eunhyeok Park
DiffM, MQ
04 Jun 2023
OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models
AAAI Conference on Artificial Intelligence (AAAI), 2024
Changhun Lee, Jungyu Jin, Taesu Kim, Hyungjun Kim, Eunhyeok Park
MQ
04 Jun 2023
FlexRound: Learnable Rounding based on Element-wise Division for Post-Training Quantization
International Conference on Machine Learning (ICML), 2023
J. H. Lee, Jeonghoon Kim, S. Kwon, Dongsoo Lee
MQ
01 Jun 2023
Towards Accurate Post-training Quantization for Diffusion Models
Computer Vision and Pattern Recognition (CVPR), 2023
Changyuan Wang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu
MQ
30 May 2023
PTQD: Accurate Post-Training Quantization for Diffusion Models
Neural Information Processing Systems (NeurIPS), 2023
Yefei He, Luping Liu, Jing Liu, Weijia Wu, Hong Zhou, Bohan Zhuang
DiffM, MQ
18 May 2023
MBQuant: A Novel Multi-Branch Topology Method for Arbitrary Bit-width Network Quantization
Pattern Recognition (Pattern Recogn.), 2023
Mingliang Xu, Yuyao Zhou, Jiayi Ji, Rongrong Ji
MQ
14 May 2023
Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization
IEEE Transactions on Information Forensics and Security (IEEE TIFS), 2023
Yulong Yang, Chenhao Lin, Qian Li, Subrat Kishore Dutta, Haoran Fan, Dawei Zhou, Nannan Wang, Tongliang Liu, Chao Shen
AAML, MQ
10 May 2023
Improving Post-Training Quantization on Object Detection with Task Loss-Guided Lp Metric
Lin Niu, Jia-Wen Liu, Zhihang Yuan, Dawei Yang, Xinggang Wang, Wenyu Liu
MQ
19 Apr 2023
Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Yazhe Niu, Jian Ren, Zhengang Li
MQ
18 Apr 2023
Benchmarking the Reliability of Post-training Quantization: a Particular Focus on Worst-case Performance
Zhihang Yuan, Jiawei Liu, Jiaxiang Wu, Dawei Yang, Qiang Wu, Guangyu Sun, Wenyu Liu, Xinggang Wang, Bingzhe Wu
MQ
23 Mar 2023
Q-HyViT: Post-Training Quantization of Hybrid Vision Transformers with Bridge Block Reconstruction for IoT Systems
IEEE Internet of Things Journal (IEEE IoT J.), 2023
Jemin Lee, Yongin Kwon, Sihyeong Park, Misun Yu, Jeman Park, Hwanjun Song
ViT, MQ
22 Mar 2023
Solving Oscillation Problem in Post-Training Quantization Through a Theoretical Perspective
Computer Vision and Pattern Recognition (CVPR), 2023
Yuexiao Ma, Huixia Li, Xiawu Zheng, Xuefeng Xiao, Rui Wang, Shilei Wen, Xin Pan, Jiayi Ji, Rongrong Ji
MQ
21 Mar 2023
Redistribution of Weights and Activations for AdderNet Quantization
Neural Information Processing Systems (NeurIPS), 2022
Ying Nie, Kai Han, Haikang Diao, Chuanjian Liu, Enhua Wu, Yunhe Wang
MQ
20 Dec 2022
RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers
IEEE International Conference on Computer Vision (ICCV), 2023
Zhikai Li, Junrui Xiao, Lianwei Yang, Qingyi Gu
MQ
16 Dec 2022
PD-Quant: Post-Training Quantization based on Prediction Difference Metric
Computer Vision and Pattern Recognition (CVPR), 2023
Jiawei Liu, Lin Niu, Zhihang Yuan, Dawei Yang, Xinggang Wang, Wenyu Liu
MQ
14 Dec 2022
Genie: Show Me the Data for Quantization
Computer Vision and Pattern Recognition (CVPR), 2023
Yongkweon Jeon, Chungman Lee, Ho-Young Kim
MQ
09 Dec 2022
Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models
Neural Information Processing Systems (NeurIPS), 2022
Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Yazhe Niu, Shanghang Zhang, Tao Gui, F. Yu, Xianglong Liu
MQ
27 Sep 2022
CWP: Instance complexity weighted channel-wise soft masks for network pruning
Jiapeng Wang, Ming Ma, Zhenhua Yu
08 Sep 2022
Efficient Adaptive Activation Rounding for Post-Training Quantization
Zhengyi Li, Cong Guo, Zhanda Zhu, Yangjie Zhou, Yuxian Qiu, Xiaotian Gao, Jingwen Leng, Minyi Guo
MQ
25 Aug 2022
Symmetry Regularization and Saturating Nonlinearity for Robust Quantization
European Conference on Computer Vision (ECCV), 2022
Sein Park, Yeongsang Jang, Eunhyeok Park
MQ
31 Jul 2022
Evaluating the Practicality of Learned Image Compression
Hongjiu Yu, Qiancheng Sun, J. Hu, Xin Xue, Jixiang Luo, ..., Pengbo Wang, Yuanyuan Wang, Yaxu Dai, Yan Wang, Hongwei Qin
29 Jul 2022
What Do Compressed Multilingual Machine Translation Models Forget?
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, Laurent Besacier
AI4CE
22 May 2022
Towards Efficient Post-training Quantization of Pre-trained Language Models
Haoli Bai, Lu Hou, Lifeng Shang, Xin Jiang, Irwin King, Michael R. Lyu
MQ
30 Sep 2021