Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression
Cody Blakeney, Xiaomin Li, Yan Yan, Ziliang Zong
5 December 2020
arXiv: 2012.03096
Papers citing "Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression" (10 papers shown):
1. Knowledge Distillation: Enhancing Neural Network Compression with Integrated Gradients
   David E. Hernandez, J. Chang, Torbjörn E. M. Nordling
   17 Mar 2025

2. Computer Vision Model Compression Techniques for Embedded Systems: A Survey
   Alexandre Lopes, Fernando Pereira dos Santos, D. Oliveira, Mauricio Schiezaro, Hélio Pedrini
   15 Aug 2024

3. FedD2S: Personalized Data-Free Federated Knowledge Distillation
   Kawa Atapour, S. J. Seyedmohammadi, J. Abouei, Arash Mohammadi, Konstantinos N. Plataniotis [FedML]
   16 Feb 2024

4. DONNAv2 -- Lightweight Neural Architecture Search for Vision tasks
   Sweta Priyadarshi, Tianyu Jiang, Hsin-Pai Cheng, S. Rama Krishna, Viswanath Ganapathy, C. Patel
   26 Sep 2023

5. TFormer: A Transmission-Friendly ViT Model for IoT Devices
   Zhichao Lu, Chuntao Ding, Felix Juefei Xu, Vishnu Naresh Boddeti, Shangguang Wang, Yun Yang
   15 Feb 2023

6. Pipe-BD: Pipelined Parallel Blockwise Distillation
   Hongsun Jang, Jaewon Jung, Jaeyong Song, Joonsang Yu, Youngsok Kim, Jinho Lee [MoE, AI4CE]
   29 Jan 2023

7. Data-Independent Structured Pruning of Neural Networks via Coresets
   Ben Mussay, Dan Feldman, Samson Zhou, Vladimir Braverman, Margarita Osadchy
   19 Aug 2020

8. Synthetic Depth-of-Field with a Single-Camera Mobile Phone
   Neal Wadhwa, Rahul Garg, David E. Jacobs, Bryan E. Feldman, Nori Kanazawa, Robert E. Carroll, Yair Movshovitz-Attias, Jonathan T. Barron, Yael Pritch, M. Levoy [3DH, MDE]
   11 Jun 2018

9. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
   Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean [AIMat]
   26 Sep 2016

10. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang [ODL]
    15 Sep 2016