arXiv: 2006.11487
Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation
20 June 2020
Duong H. Le, Vo Trung Nhan, N. Thoai
VLM

Papers citing "Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation" (6 papers)
Characterizing Disparity Between Edge Models and High-Accuracy Base Models for Vision Tasks
Zhenyu Wang, S. Nirjon
13 Jul 2024

Learning to Project for Cross-Task Knowledge Distillation
Dylan Auty, Roy Miles, Benedikt Kolbeinsson, K. Mikolajczyk
21 Mar 2024

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML
09 Apr 2018

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
UQCV, BDL
05 Dec 2016