Recurrent Neural Network Training with Dark Knowledge Transfer
Zhiyuan Tang, Dong Wang, Zhiyong Zhang
arXiv:1505.04630, 18 May 2015
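In the distillation literature, "dark knowledge" refers to the information carried by a teacher model's temperature-softened output distribution; the paper above applies this kind of transfer to RNN training. Below is a minimal sketch of the generic soft-target loss, written here in PyTorch; the function name, tensor shapes, and temperature value are illustrative assumptions, not details taken from the paper:

```python
import torch
import torch.nn.functional as F

def dark_knowledge_loss(student_logits: torch.Tensor,
                        teacher_logits: torch.Tensor,
                        T: float = 2.0) -> torch.Tensor:
    """Soft-target (dark knowledge) distillation loss.

    Both inputs are raw logits of shape (batch, num_classes); for an RNN
    they can be flattened frame-level outputs. T is the softening
    temperature (T=2.0 is an assumed value, not the paper's setting).
    """
    # Teacher probabilities and student log-probabilities, both softened by T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between the two distributions; the T**2 factor keeps
    # gradient magnitudes comparable to a hard-label cross-entropy term.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * (T ** 2)
```

In the usual recipe this term is combined with the ordinary cross-entropy loss on the ground-truth labels, with a weight balancing the two.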
Papers citing "Recurrent Neural Network Training with Dark Knowledge Transfer" (34 papers shown)
1. Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning
   Wenke Huang, J. J. Valero-Mas, Dasaem Jeong, Bo Du. [FedML] 28 Sep 2023 (38 / 44 / 0)
2. Multi-Label Knowledge Distillation
   Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang. 12 Aug 2023 (36 / 10 / 0)
3. Knowledge Distillation Leveraging Alternative Soft Targets from Non-Parallel Qualified Speech Data
   Tohru Nagano, Takashi Fukuda, Gakuto Kurata. 16 Dec 2021 (16 / 1 / 0)
4. DarkGAN: Exploiting Knowledge Distillation for Comprehensible Audio Synthesis with GANs
   J. Nistal, Stefan Lattner, G. Richard. 03 Aug 2021 (21 / 8 / 0)
5. Warming up recurrent neural networks to maximise reachable multistability greatly improves learning
   Gaspard Lambrechts, Florent De Geeter, Nicolas Vecoven, D. Ernst, G. Drion. 02 Jun 2021 (21 / 2 / 0)
6. Towards Understanding Knowledge Distillation
   Mary Phuong, Christoph H. Lampert. 27 May 2021 (9 / 310 / 0)
7. Spectral Pruning for Recurrent Neural Networks
   Takashi Furuya, Kazuma Suetake, K. Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon. 23 May 2021 (27 / 4 / 0)
8. Copycat CNN: Are Random Non-Labeled Data Enough to Steal Knowledge from Black-box Models?
   Jacson Rodrigues Correia-Silva, Rodrigo Berriel, C. Badue, Alberto F. de Souza, Thiago Oliveira-Santos. [MLAU] 21 Jan 2021 (13 / 14 / 0)
9. Relation Clustering in Narrative Knowledge Graphs
   Simone Mellace, K. Vani, Alessandro Antonucci. 27 Nov 2020 (10 / 7 / 0)
10. A Model Compression Method with Matrix Product Operators for Speech Enhancement
    Xingwei Sun, Ze-Feng Gao, Zhong-Yi Lu, Junfeng Li, Yonghong Yan. 10 Oct 2020 (11 / 23 / 0)
11. Tracking-by-Trackers with a Distilled and Reinforced Model
    Matteo Dunnhofer, N. Martinel, C. Micheloni. [VOT, OffRL] 08 Jul 2020 (27 / 4 / 0)
12. LabelEnc: A New Intermediate Supervision Method for Object Detection
    Miao Hao, Yitao Liu, Xinming Zhang, Jian Sun. 07 Jul 2020 (22 / 25 / 0)
13. Neuroevolutionary Transfer Learning of Deep Recurrent Neural Networks through Network-Aware Adaptation
    A. ElSaid, Joshua Karns, Alexander Ororbia, Daniel E. Krutz, Zimeng Lyu, Travis J. Desell. 04 Jun 2020 (8 / 0 / 0)
14. Why distillation helps: a statistical perspective
    A. Menon, A. S. Rawat, Sashank J. Reddi, Seungyeon Kim, Sanjiv Kumar. [FedML] 21 May 2020 (25 / 22 / 0)
15. Heterogeneous Knowledge Distillation using Information Flow Modeling
    Nikolaos Passalis, Maria Tzelepi, Anastasios Tefas. 02 May 2020 (16 / 138 / 0)
16. Filter Grafting for Deep Neural Networks: Reason, Method, and Cultivation
    Hao Cheng, Fanxu Meng, Ke Li, Huixiang Luo, Guangming Lu, Xing Sun, Feiyue Huang. 26 Apr 2020 (8 / 0 / 0)
17. Knowledge distillation for optimization of quantized deep neural networks
    Sungho Shin, Yoonho Boo, Wonyong Sung. [MQ] 04 Sep 2019 (11 / 6 / 0)
18. Memory- and Communication-Aware Model Compression for Distributed Deep Learning Inference on IoT
    Kartikeya Bhardwaj, Chingyi Lin, A. L. Sartor, R. Marculescu. [GNN] 26 Jul 2019 (13 / 51 / 0)
19. The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning
    Bonggun Shin, Hao Yang, Jinho Choi. 31 May 2019 (7 / 12 / 0)
20. Conditional Teacher-Student Learning
    Zhong Meng, Jinyu Li, Yong Zhao, Jiawei Liu. 28 Apr 2019 (22 / 90 / 0)
21. LoANs: Weakly Supervised Object Detection with Localizer Assessor Networks
    Christian Bartz, Haojin Yang, Joseph Bethge, Christoph Meinel. 14 Nov 2018 (17 / 2 / 0)
22. Training Recurrent Neural Networks via Dynamical Trajectory-Based Optimization
    H. Khodabandehlou, M. S. Fadali. 10 May 2018 (18 / 3 / 0)
23. Learning Deep Representations with Probabilistic Knowledge Transfer
    Nikolaos Passalis, Anastasios Tefas. 28 Mar 2018 (31 / 406 / 0)
24. Tensor Decomposition for Compressing Recurrent Neural Network
    Andros Tjandra, S. Sakti, Satoshi Nakamura. 28 Feb 2018 (11 / 52 / 0)
25. Differentially Private Distributed Learning for Language Modeling Tasks
    Vadim Popov, Mikhail Kudinov, Irina Piontkovskaya, Petr Vytovtov, A. Nevidomsky. [FedML] 20 Dec 2017 (35 / 3 / 0)
26. Bolt: Accelerated Data Mining with Fast Vector Compression
    Davis W. Blalock, John Guttag. [MQ] 30 Jun 2017 (12 / 30 / 0)
27. Compressing Recurrent Neural Network with Tensor Train
    Andros Tjandra, S. Sakti, Satoshi Nakamura. 23 May 2017 (28 / 109 / 0)
28. Generative Knowledge Transfer for Neural Language Models
    Sungho Shin, Kyuyeon Hwang, Wonyong Sung. 14 Aug 2016 (10 / 12 / 0)
29. Diving deeper into mentee networks
    Ragav Venkatesan, Baoxin Li. 27 Apr 2016 (6 / 14 / 0)
30. Blending LSTMs into CNNs
    Krzysztof J. Geras, Abdel-rahman Mohamed, R. Caruana, G. Urban, Shengjie Wang, Ozlem Aslan, Matthai Philipose, Matthew Richardson, Charles Sutton. 19 Nov 2015 (19 / 60 / 0)
31. Policy Distillation
    Andrei A. Rusu, Sergio Gomez Colmenarejo, Çağlar Gülçehre, Guillaume Desjardins, J. Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, R. Hadsell. 19 Nov 2015 (22 / 680 / 0)
32. Transfer Learning for Speech and Language Processing
    Dong Wang, T. Zheng. 19 Nov 2015 (29 / 218 / 0)
33. Learning from LDA using Deep Neural Networks
    Dongxu Zhang, Tianyi Luo, Dong Wang, Rong Liu. [BDL] 05 Aug 2015 (34 / 23 / 0)
34. Distilling Word Embeddings: An Encoding Approach
    Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin. [FedML] 15 Jun 2015 (24 / 27 / 0)