Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Asit K. Mishra, Debbie Marr
15 November 2017 · arXiv:1711.05852
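The paper above applies knowledge distillation to low-precision networks. As a minimal illustrative sketch (not the authors' exact scheme; the temperature `T`, weight `alpha`, and their combination below are assumptions), the standard distillation loss pairs a soft-target term against a full-precision teacher with ordinary cross-entropy on the labels:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style knowledge-distillation loss. A low-precision student
    can be trained against a full-precision teacher this way; T and alpha
    here are illustrative defaults, not the paper's settings."""
    # Soft-target term: KL divergence between temperature-softened
    # student and teacher distributions, rescaled by T^2 so its gradient
    # magnitude stays comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy with ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

With `alpha` close to 1 the student mostly imitates the teacher's softened outputs, which is what lets a quantized student recover accuracy it would lose when trained on hard labels alone.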
Papers citing "Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy" (9 of 59 papers shown)
Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization
K. Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, K. Cheng, Roeland Nusselder · 05 Jun 2019 · MQ
Training Quantized Neural Networks with a Full-precision Auxiliary Module
Bohan Zhuang, Lingqiao Liu, Mingkui Tan, Chunhua Shen, Ian Reid · 27 Mar 2019 · MQ
Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation
Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, Ian Reid · 22 Nov 2018 · MQ
Relaxed Quantization for Discretized Neural Networks
Christos Louizos, M. Reisser, Tijmen Blankevoort, E. Gavves, Max Welling · 03 Oct 2018 · MQ
Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation
Zhezhi He, Deliang Fan · 02 Oct 2018 · MQ
Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan · 17 Jul 2018 · MQ
Quantizing deep convolutional networks for efficient inference: A whitepaper
Raghuraman Krishnamoorthi · 21 Jun 2018 · MQ
Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
Haichuan Yang, Yuhao Zhu, Ji Liu · 12 Jun 2018 · CVBM
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen · 10 Feb 2017 · MQ