Loss-aware Weight Quantization of Deep Networks

Lu Hou, James T. Kwok · 23 February 2018 · arXiv:1802.08635 · MQ
ArXiv · PDF · HTML
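
The page gives only bibliographic details for the paper. As a loose, minimal sketch of what "loss-aware" weight quantization generally means, the Python/NumPy snippet below binarizes a weight vector while weighting the quantization error by a diagonal curvature estimate d (for example an Adam-style second-moment accumulator). The function name, the closed-form scale, and the toy numbers are illustrative assumptions, not taken from the paper or from this page.

    import numpy as np

    def loss_aware_binarize(w, d, eps=1e-12):
        # Approximate w by alpha * sign(w). The scale alpha minimizes the
        # curvature-weighted error sum_i d_i * (alpha * b_i - w_i)^2, so
        # coordinates where the loss is more sensitive (large d_i) count more.
        w = np.asarray(w, dtype=float)
        d = np.asarray(d, dtype=float)
        b = np.where(w >= 0.0, 1.0, -1.0)                 # binary codes
        alpha = np.dot(d, np.abs(w)) / (np.sum(d) + eps)  # weighted scale
        return alpha * b

    # Toy usage with made-up numbers: larger d means higher loss sensitivity.
    w = np.array([0.8, -0.1, 0.05, -1.2])
    d = np.array([2.0, 0.1, 0.1, 3.0])
    print(loss_aware_binarize(w, d))  # roughly 1.0 * [1, -1, 1, -1]

The paper itself targets more general low-bit weight quantization during training; this sketch only illustrates the curvature-weighting idea suggested by the title, not the authors' algorithm.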

Papers citing "Loss-aware Weight Quantization of Deep Networks"

19 / 69 papers shown
 1. Balanced Binary Neural Networks with Gated Residual
    Mingzhu Shen, Xianglong Liu, Ruihao Gong, Kai Han · MQ · 26 Sep 2019 · (9 / 33 / 0)
 2. Accurate and Compact Convolutional Neural Networks with Trained Binarization
    Zhe Xu, R. Cheung · MQ · 25 Sep 2019 · (19 / 54 / 0)
 3. Structured Binary Neural Networks for Image Recognition
    Bohan Zhuang, Chunhua Shen, Mingkui Tan, Peng Chen, Lingqiao Liu, Ian Reid · MQ · 22 Sep 2019 · (22 / 17 / 0)
 4. Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations
    Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen · MQ · 10 Aug 2019 · (26 / 44 / 0)
 5. Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model
    Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek V. Menon, Sun Choi, Kushal Datta, V. Saletore · MQ · 03 Jun 2019 · (14 / 129 / 0)
 6. SinReQ: Generalized Sinusoidal Regularization for Low-Bitwidth Deep Quantized Training
    Ahmed T. Elthakeb, Prannoy Pilligundla, H. Esmaeilzadeh · MQ · 04 May 2019 · (15 / 9 / 0)
 7. Towards Efficient Model Compression via Learned Global Ranking
    Ting-Wu Chin, Ruizhou Ding, Cha Zhang, Diana Marculescu · 28 Apr 2019 · (13 / 170 / 0)
 8. MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning
    Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, K. Cheng, Jian-jun Sun · 25 Mar 2019 · (6 / 554 / 0)
 9. Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets
    Penghang Yin, J. Lyu, Shuai Zhang, Stanley Osher, Y. Qi, Jack Xin · MQ, LLMSV · 13 Mar 2019 · (19 / 305 / 0)
10. Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation
    Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, Ian Reid · MQ · 22 Nov 2018 · (27 / 152 / 0)
11. ProxQuant: Quantized Neural Networks via Proximal Operators
    Yu Bai, Yu-Xiang Wang, Edo Liberty · MQ · 01 Oct 2018 · (11 / 117 / 0)
12. Learning Recurrent Binary/Ternary Weights
    A. Ardakani, Zhengyun Ji, S. C. Smithson, B. Meyer, W. Gross · MQ · 28 Sep 2018 · (4 / 27 / 0)
13. Learning Sparse Low-Precision Neural Networks With Learnable Regularization
    Yoojin Choi, Mostafa El-Khamy, Jungwon Lee · MQ · 01 Sep 2018 · (19 / 31 / 0)
14. A Survey on Methods and Theories of Quantized Neural Networks
    Yunhui Guo · MQ · 13 Aug 2018 · (27 / 230 / 0)
15. Training Compact Neural Networks with Binary Weights and Low Precision Activations
    Bohan Zhuang, Chunhua Shen, Ian Reid · MQ · 08 Aug 2018 · (13 / 14 / 0)
16. LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
    Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, G. Hua · MQ · 26 Jul 2018 · (9 / 696 / 0)
17. NullaNet: Training Deep Neural Networks for Reduced-Memory-Access Inference
    M. Nazemi, Ghasem Pasandi, Massoud Pedram · 23 Jul 2018 · (13 / 20 / 0)
18. Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
    Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan · MQ · 17 Jul 2018 · (11 / 75 / 0)
19. Universal Deep Neural Network Compression
    Yoojin Choi, Mostafa El-Khamy, Jungwon Lee · MQ · 07 Feb 2018 · (81 / 85 / 0)