Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling
Minghan Li, Tanli Zuo, Ruicheng Li, Martha White, Wei-Shi Zheng
arXiv:1812.00914, 3 December 2018
Papers citing "Accelerating Large Scale Knowledge Distillation via Dynamic Importance Sampling" (2 papers shown)
Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
9 Apr 2018
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016