Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation

20 June 2020
Duong H. Le, Vo Trung Nhan, N. Thoai
VLM
arXiv: 2006.11487

Papers citing "Paying more attention to snapshots of Iterative Pruning: Improving Model Compression via Ensemble Distillation"

6 / 6 papers shown

Characterizing Disparity Between Edge Models and High-Accuracy Base Models for Vision Tasks
Zhenyu Wang, S. Nirjon
13 Jul 2024

Learning to Project for Cross-Task Knowledge Distillation
Dylan Auty, Roy Miles, Benedikt Kolbeinsson, K. Mikolajczyk
21 Mar 2024

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

Large scale distributed neural network training through online distillation
Rohan Anil, Gabriel Pereyra, Alexandre Passos, Róbert Ormándi, George E. Dahl, Geoffrey E. Hinton
FedML
09 Apr 2018

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell
UQCV, BDL
05 Dec 2016