

Knowledge distillation for optimization of quantized deep neural networks

4 September 2019
Sungho Shin, Yoonho Boo, Wonyong Sung
MQ

Papers citing "Knowledge distillation for optimization of quantized deep neural networks"

4 / 4 papers shown
ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression
Weiyue Su, Xuyi Chen, Shi Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
04 Jun 2021
Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks
Yoonho Boo, Sungho Shin, Jungwook Choi, Wonyong Sung
MQ
30 Sep 2020
Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM
09 Jun 2020
Neural Compatibility Modeling with Attentive Knowledge Distillation
Xuemeng Song, Fuli Feng, Xianjing Han, Xin Yang, Wei Liu, Liqiang Nie
17 Apr 2018