ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy

15 November 2017
Asit K. Mishra, Debbie Marr
FedML

Papers citing "Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy"

9 / 59 papers shown

Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization
K. Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, K. Cheng, Roeland Nusselder
MQ · 16 · 110 · 0 · 05 Jun 2019

Training Quantized Neural Networks with a Full-precision Auxiliary Module
Bohan Zhuang, Lingqiao Liu, Mingkui Tan, Chunhua Shen, Ian Reid
MQ · 24 · 62 · 0 · 27 Mar 2019

Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation
Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, Ian Reid
MQ · 27 · 152 · 0 · 22 Nov 2018

Relaxed Quantization for Discretized Neural Networks
Christos Louizos, M. Reisser, Tijmen Blankevoort, E. Gavves, Max Welling
MQ · 25 · 131 · 0 · 03 Oct 2018

Simultaneously Optimizing Weight and Quantizer of Ternary Neural Network using Truncated Gaussian Approximation
Zhezhi He, Deliang Fan
MQ · 13 · 66 · 0 · 02 Oct 2018

Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan
MQ · 11 · 75 · 0 · 17 Jul 2018

Quantizing deep convolutional networks for efficient inference: A whitepaper
Raghuraman Krishnamoorthi
MQ · 14 · 990 · 0 · 21 Jun 2018

Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
Haichuan Yang, Yuhao Zhu, Ji Liu
CVBM · 12 · 36 · 0 · 12 Jun 2018

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 311 · 1,047 · 0 · 10 Feb 2017