Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks
Zheng Xu, Yen-Chang Hsu, Jiawei Huang
arXiv:1709.00513, 2 September 2017
Topics: GAN
Papers citing "Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks" (4 / 4 papers shown)

1. SlimNets: An Exploration of Deep Model Compression and Acceleration
   Ini Oguntola, Subby Olubeko, Chris Sweeney
   01 Aug 2018

2. Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
   Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi
   Topics: AAML
   15 May 2018

3. Conditional Image Synthesis With Auxiliary Classifier GANs
   Augustus Odena, C. Olah, Jonathon Shlens
   Topics: GAN
   30 Oct 2016

4. Benefits of depth in neural networks
   Matus Telgarsky
   14 Feb 2016