Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks

2 September 2017
Zheng Xu, Yen-Chang Hsu, Jiawei Huang
Topic: GAN

Papers citing "Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks"

4 of 4 papers shown

SlimNets: An Exploration of Deep Model Compression and Acceleration
Ini Oguntola, Subby Olubeko, Chris Sweeney
01 Aug 2018

Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
Byeongho Heo, Minsik Lee, Sangdoo Yun, J. Choi
Topic: AAML
15 May 2018

Conditional Image Synthesis With Auxiliary Classifier GANs
Augustus Odena, C. Olah, Jonathon Shlens
Topic: GAN
30 Oct 2016

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016