Do Deep Nets Really Need to be Deep? (arXiv:1312.6184)

21 December 2013
Lei Jimmy Ba
R. Caruana

Papers citing "Do Deep Nets Really Need to be Deep?"

Showing 50 of 337 citing papers.
Filter Distillation for Network Compression
Xavier Suau
Luca Zappella
N. Apostoloff
24
38
0
20 Jul 2018
Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees
Guiliang Liu
Oliver Schulte
Wang Zhu
Qingcan Li
AI4CE
15
134
0
16 Jul 2018
FATE: Fast and Accurate Timing Error Prediction Framework for Low Power DNN Accelerator Design
J. Zhang
S. Garg
11
21
0
02 Jul 2018
Modality Distillation with Multiple Stream Networks for Action Recognition
Nuno C. Garcia
Pietro Morerio
Vittorio Murino
30
180
0
19 Jun 2018
Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
Haichuan Yang
Yuhao Zhu
Ji Liu
CVBM
14
36
0
12 Jun 2018
Distilling Knowledge for Search-based Structured Prediction
Yijia Liu
Wanxiang Che
Huaipeng Zhao
Bing Qin
Ting Liu
27
22
0
29 May 2018
Tensorial Neural Networks: Generalization of Neural Networks and Application to Model Compression
Jiahao Su
Jingling Li
Bobby Bhattacharjee
Furong Huang
16
20
0
25 May 2018
Deploy Large-Scale Deep Neural Networks in Resource Constrained IoT Devices with Local Quantization Region
Yi Yang
A. Chen
Xiaoming Chen
Jiang Ji
Zhenyang Chen
Yan Dai
MQ
13
11
0
24 May 2018
Nonparametric Bayesian Deep Networks with Local Competition
Konstantinos P. Panousis
S. Chatzis
Sergios Theodoridis
BDL
22
32
0
19 May 2018
Object detection at 200 Frames Per Second
Rakesh Mehta
Cemalettin Öztürk
ObjD
30
61
0
16 May 2018
Hu-Fu: Hardware and Software Collaborative Attack Framework against Neural Networks
Wenshuo Li
Jincheng Yu
Xuefei Ning
Pengjun Wang
Qi Wei
Yu Wang
Huazhong Yang
AAML
31
61
0
14 May 2018
Born Again Neural Networks
Tommaso Furlanello
Zachary Chase Lipton
Michael Tschannen
Laurent Itti
Anima Anandkumar
36
1,020
0
12 May 2018
I Have Seen Enough: A Teacher Student Network for Video Classification Using Fewer Frames
S. Bhardwaj
Mitesh M. Khapra
23
3
0
12 May 2018
Boosting Self-Supervised Learning via Knowledge Transfer
M. Noroozi
Ananth Vinjimoor
Paolo Favaro
Hamed Pirsiavash
SSL
212
292
0
01 May 2018
The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches
Md. Zahangir Alom
T. Taha
C. Yakopcic
Stefan Westberg
P. Sidike
Mst Shamima Nasrin
B. Van Essen
A. Awwal
V. Asari
VLM
29
873
0
03 Mar 2018
Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun
Torsten Hoefler
GNN
33
702
0
26 Feb 2018
The Description Length of Deep Learning Models
Léonard Blier
Yann Ollivier
32
95
0
20 Feb 2018
ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Neural Network Accelerators
Jeff Zhang
Kartheek Rangineni
Zahra Ghodsi
S. Garg
28
117
0
11 Feb 2018
Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator
Jeff Zhang
Tianyu Gu
K. Basu
S. Garg
6
134
0
11 Feb 2018
Few-shot learning of neural networks from scratch by pseudo example optimization
Akisato Kimura
Zoubin Ghahramani
Koh Takeuchi
Tomoharu Iwata
N. Ueda
35
52
0
08 Feb 2018
Digital Watermarking for Deep Neural Networks
Yuki Nagai
Yusuke Uchida
S. Sakazawa
Shin'ichi Satoh
WIGM
23
143
0
06 Feb 2018
Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks
Deepak Mittal
S. Bhardwaj
Mitesh M. Khapra
Balaraman Ravindran
VLM
36
65
0
31 Jan 2018
Focus: Querying Large Video Datasets with Low Latency and Low Cost
Kevin Hsieh
Ganesh Ananthanarayanan
P. Bodík
P. Bahl
Matthai Philipose
Phillip B. Gibbons
O. Mutlu
16
275
0
10 Jan 2018
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross
Finale Doshi-Velez
AAML
37
675
0
26 Nov 2017
Knowledge Concentration: Learning 100K Object Classifiers in a Single CNN
J. Gao
Zijian Guo
Zerui Li
Ram Nevatia
VLM
20
20
0
21 Nov 2017
Moonshine: Distilling with Cheap Convolutions
Elliot J. Crowley
Gavia Gray
Amos Storkey
27
120
0
07 Nov 2017
Towards Effective Low-bitwidth Convolutional Neural Networks
Bohan Zhuang
Chunhua Shen
Mingkui Tan
Lingqiao Liu
Ian Reid
MQ
31
231
0
01 Nov 2017
Interpretation of Neural Networks is Fragile
Amirata Ghorbani
Abubakar Abid
James Zou
FAtt
AAML
71
857
0
29 Oct 2017
Knowledge Projection for Deep Neural Networks
Zhi Zhang
G. Ning
Zhihai He
38
15
0
26 Oct 2017
Trace norm regularization and faster inference for embedded speech recognition RNNs
Markus Kliegl
Siddharth Goyal
Kexin Zhao
Kavya Srinet
M. Shoeybi
26
8
0
25 Oct 2017
Data-Free Knowledge Distillation for Deep Neural Networks
Raphael Gontijo-Lopes
Stefano Fenu
Thad Starner
22
270
0
19 Oct 2017
Deep Learning Techniques for Music Generation -- A Survey
Jean-Pierre Briot
Gaëtan Hadjeres
F. Pachet
MGen
37
297
0
05 Sep 2017
Sequence Prediction with Neural Segmental Models
Hao Tang
29
2
0
05 Sep 2017
Interpretability via Model Extraction
Osbert Bastani
Carolyn Kim
Hamsa Bastani
FAtt
16
129
0
29 Jun 2017
Iterative Machine Teaching
Weiyang Liu
Bo Dai
Ahmad Humayun
C. Tay
Chen Yu
Linda B. Smith
James M. Rehg
Le Song
26
140
0
30 May 2017
Kronecker Recurrent Units
C. Jose
Moustapha Cissé
F. Fleuret
ODL
24
45
0
29 May 2017
Bayesian Compression for Deep Learning
Christos Louizos
Karen Ullrich
Max Welling
UQCV
BDL
23
479
0
24 May 2017
Interpreting Blackbox Models via Model Extraction
Osbert Bastani
Carolyn Kim
Hamsa Bastani
FAtt
27
170
0
23 May 2017
Compressing Recurrent Neural Network with Tensor Train
Andros Tjandra
S. Sakti
Satoshi Nakamura
21
109
0
23 May 2017
Hardware-Software Codesign of Accurate, Multiplier-free Deep Neural Networks
Hokchhay Tann
S. Hashemi
Iris Bahar
Sherief Reda
MQ
14
74
0
11 May 2017
Knowledge-Guided Deep Fractal Neural Networks for Human Pose Estimation
G. Ning
Zhi Zhang
Zhiquan He
GAN
29
169
0
05 May 2017
A Teacher-Student Framework for Zero-Resource Neural Machine Translation
Yun Chen
Yang Liu
Yong Cheng
V. Li
35
147
0
02 May 2017
The loss surface of deep and wide neural networks
Quynh N. Nguyen
Matthias Hein
ODL
45
283
0
26 Apr 2017
Deep Architectures for Modulation Recognition
Nathan E. West
Tim O'Shea
19
401
0
27 Mar 2017
Predicting Deeper into the Future of Semantic Segmentation
Pauline Luc
Natalia Neverova
Camille Couprie
Jakob Verbeek
Yann LeCun
23
242
0
22 Mar 2017
Knowledge distillation using unlabeled mismatched images
Mandar M. Kulkarni
Kalpesh Patil
Shirish S. Karande
28
16
0
21 Mar 2017
Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World
S. Garg
Irina Rish
Guillermo Cecchi
A. Lozano
OffRL
CLL
28
6
0
22 Jan 2017
Learning From Noisy Large-Scale Datasets With Minimal Supervision
Andreas Veit
N. Alldrin
Gal Chechik
Ivan Krasin
Abhinav Gupta
Serge J. Belongie
23
476
0
06 Jan 2017
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
Sergey Zagoruyko
N. Komodakis
14
2,550
0
12 Dec 2016
In Teacher We Trust: Learning Compressed Models for Pedestrian Detection
Jonathan Shen
Noranart Vesdapunt
Vishnu Naresh Boddeti
Kris M. Kitani
13
29
0
01 Dec 2016