HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision
29 April 2019
Zhen Dong, Z. Yao, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ

Papers citing "HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision"

47 / 97 papers shown

FP8 Quantization: The Power of the Exponent
Andrey Kuzmin, M. V. Baalen, Yuwei Ren, Markus Nagel, Jorn W. T. Peters, Tijmen Blankevoort
MQ · 19 Aug 2022

Mixed-Precision Neural Networks: A Survey
M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi
MQ · 11 Aug 2022

Symmetry Regularization and Saturating Nonlinearity for Robust Quantization
Sein Park, Yeongsang Jang, Eunhyeok Park
MQ · 31 Jul 2022

CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution
Chee Hong, Sungyong Baik, Heewon Kim, Seungjun Nah, Kyoung Mu Lee
SupR, MQ · 21 Jul 2022

POET: Training Neural Networks on Tiny Devices with Integrated Rematerialization and Paging
Shishir G. Patil, Paras Jain, P. Dutta, Ion Stoica, Joseph E. Gonzalez
15 Jul 2022

I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference
Zhikai Li, Qingyi Gu
MQ · 04 Jul 2022

Fast Lossless Neural Compression with Integer-Only Discrete Flows
Siyu Wang, Jianfei Chen, Chongxuan Li, Jun Zhu, Bo Zhang
MQ · 17 Jun 2022

ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers
Z. Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, Yuxiong He
VLM, MQ · 04 Jun 2022

Quantization in Layer's Input is Matter
Daning Cheng, Wenguang Chen
MQ · 10 Feb 2022

Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data
Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney
06 Feb 2022

When Do Flat Minima Optimizers Work?
Jean Kaddour, Linqing Liu, Ricardo M. A. Silva, Matt J. Kusner
ODL · 01 Feb 2022

Post-training Quantization for Neural Networks with Provable Guarantees
Jinjie Zhang, Yixuan Zhou, Rayan Saab
MQ · 26 Jan 2022

Neural Network Quantization with AI Model Efficiency Toolkit (AIMET)
S. Siddegowda, Marios Fournarakis, Markus Nagel, Tijmen Blankevoort, Chirag I. Patel, Abhijit Khobare
MQ · 20 Jan 2022

DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale
Samyam Rajbhandari, Conglong Li, Z. Yao, Minjia Zhang, Reza Yazdani Aminabadi, A. A. Awan, Jeff Rasley, Yuxiong He
14 Jan 2022

BMPQ: Bit-Gradient Sensitivity Driven Mixed-Precision Quantization of DNNs from Scratch
Souvik Kundu, Shikai Wang, Qirui Sun, P. Beerel, Massoud Pedram
MQ · 24 Dec 2021

Neural Network Quantization for Efficient Inference: A Survey
Olivia Weng
MQ · 08 Dec 2021

Mixed Precision Quantization of Transformer Language Models for Speech Recognition
Junhao Xu, Shoukang Hu, Jianwei Yu, Xunying Liu, Helen M. Meng
MQ · 29 Nov 2021

Sharpness-aware Quantization for Deep Neural Networks
Jing Liu, Jianfei Cai, Bohan Zhuang
MQ · 24 Nov 2021

Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
MQ · 01 Nov 2021

Whole Brain Segmentation with Full Volume Neural Network
Yeshu Li, Jianwei Cui, Yilun Sheng, Xiao Liang, Jingdong Wang, E. Chang, Yan Xu
29 Oct 2021

Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization
Weihan Chen, Peisong Wang, Jian Cheng
MQ · 13 Oct 2021

Global Vision Transformer Pruning with Hessian-Aware Saliency
Huanrui Yang, Hongxu Yin, Maying Shen, Pavlo Molchanov, Hai Helen Li, Jan Kautz
ViT · 10 Oct 2021

Auto-Split: A General Framework of Collaborative Edge-Cloud AI
Amin Banitalebi-Dehkordi, Naveen Vedula, J. Pei, Fei Xia, Lanjun Wang, Yong Zhang
30 Aug 2021

A White Paper on Neural Network Quantization
Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, M. V. Baalen, Tijmen Blankevoort
MQ · 15 Jun 2021

Multiplierless MP-Kernel Machine For Energy-efficient Edge Devices
Abhishek Ramdas Nair, P. Nath, S. Chakrabartty, Chetan Singh Thakur
03 Jun 2021

Differentiable Model Compression via Pseudo Quantization Noise
Alexandre Défossez, Yossi Adi, Gabriel Synnaeve
DiffM, MQ · 20 Apr 2021

unzipFPGA: Enhancing FPGA-based CNN Engines with On-the-Fly Weights Generation
Stylianos I. Venieris, Javier Fernandez-Marques, Nicholas D. Lane
09 Mar 2021

hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices
F. Fahim, B. Hawks, C. Herwig, J. Hirschauer, S. Jindariani, ..., J. Ngadiuba, Miaoyuan Liu, Duc Hoang, E. Kreinar, Zhenbin Wu
09 Mar 2021

Dynamic Precision Analog Computing for Neural Networks
Sahaj Garg, Joe Lou, Anirudh Jain, Mitchell Nahmias
12 Feb 2021

BinaryBERT: Pushing the Limit of BERT Quantization
Haoli Bai, Wei Zhang, Lu Hou, Lifeng Shang, Jing Jin, Xin Jiang, Qun Liu, Michael Lyu, Irwin King
MQ · 31 Dec 2020

Hybrid and Non-Uniform quantization methods using retro synthesis data for efficient inference
Gvsl Tej Pratap, R. Kumar
MQ · 26 Dec 2020

A Tiny CNN Architecture for Medical Face Mask Detection for Resource-Constrained Endpoints
P. Mohan, A. Paul, Abhay Chirania
CVBM · 30 Nov 2020

Bringing AI To Edge: From Deep Learning's Perspective
Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam
25 Nov 2020

BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures
Tianchen Zhao, Xuefei Ning, Xiangsheng Shi, Songyi Yang, Shuang Liang, Peng Lei, Jianfei Chen, Huazhong Yang, Yu Wang
MQ · 21 Nov 2020

MSP: An FPGA-Specific Mixed-Scheme, Multi-Precision Deep Neural Network Quantization Framework
Sung-En Chang, Yanyu Li, Mengshu Sun, Weiwen Jiang, Runbin Shi, Xue Lin, Yanzhi Wang
MQ · 16 Sep 2020

The Hardware Lottery
Sara Hooker
14 Sep 2020

Transform Quantization for CNN (Convolutional Neural Network) Compression
Sean I. Young, Wang Zhe, David S. Taubman, B. Girod
MQ · 02 Sep 2020

Search What You Want: Barrier Panelty NAS for Mixed Precision Quantization
Haibao Yu, Qi Han, Jianbo Li, Jianping Shi, Guangliang Cheng, Bin Fan
MQ · 20 Jul 2020

Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors
C. Coelho, Aki Kuusela, Shane Li, Zhuang Hao, T. Aarrestad, Vladimir Loncar, J. Ngadiuba, M. Pierini, Adrian Alan Pol, S. Summers
MQ · 15 Jun 2020

An Overview of Neural Network Compression
James O'Neill
AI4CE · 05 Jun 2020

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
LM&MA, VLM · 18 Mar 2020

Efficient Bitwidth Search for Practical Mixed Precision Neural Network
Yuhang Li, Wei Wang, Haoli Bai, Ruihao Gong, Xin Dong, F. Yu
MQ · 17 Mar 2020

Least squares binary quantization of neural networks
Hadi Pouransari, Zhucheng Tu, Oncel Tuzel
MQ · 09 Jan 2020

ZeroQ: A Novel Zero Shot Quantization Framework
Yaohui Cai, Z. Yao, Zhen Dong, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ · 01 Jan 2020

HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks
Zhen Dong, Z. Yao, Yaohui Cai, Daiyaan Arfeen, A. Gholami, Michael W. Mahoney, Kurt Keutzer
MQ · 10 Nov 2019

Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers
Manuele Rusci, Alessandro Capotondi, Luca Benini
MQ · 30 May 2019

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 10 Feb 2017