One Weight Bitwidth to Rule Them All
Ting-Wu Chin, P. Chuang, Vikas Chandra, Diana Marculescu
arXiv: 2008.09916 · 22 August 2020 · Tags: MQ

Papers citing "One Weight Bitwidth to Rule Them All" (8 of 8 papers shown)
1. Self-Distilled Quantization: Achieving High Compression Rates in Transformer-Based Language Models
   James O'Neill, Sourav Dutta · VLM, MQ · 12 Jul 2023

2. CSMPQ: Class Separability Based Mixed-Precision Quantization
   Ming-Yu Wang, Taisong Jin, Miaohui Zhang, Zhengtao Yu · MQ · 20 Dec 2022

3. PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
   Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu · ViT, MQ · 13 Sep 2022

4. Mixed-Precision Neural Networks: A Survey
   M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi · MQ · 11 Aug 2022

5. A White Paper on Neural Network Quantization
   Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, M. V. Baalen, Tijmen Blankevoort · MQ · 15 Jun 2021

6. Machine Learning Systems in the IoT: Trustworthiness Trade-offs for Edge Intelligence
   Wiebke Toussaint, Aaron Yi Ding · 01 Dec 2020

7. HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks
   Zhen Dong, Z. Yao, Yaohui Cai, Daiyaan Arfeen, A. Gholami, Michael W. Mahoney, Kurt Keutzer · MQ · 10 Nov 2019

8. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
   Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen · MQ · 10 Feb 2017