
Ensemble Knowledge Distillation for Learning Improved and Efficient Networks

European Conference on Artificial Intelligence (ECAI), 2019
17 September 2019
Umar Asif, Jianbin Tang, S. Harrer
Topics: FedML

Papers citing "Ensemble Knowledge Distillation for Learning Improved and Efficient Networks"

41 of 41 papers shown
Mitigating Negative Flips via Margin Preserving Training
Simone Ricci, Niccolò Biondi, F. Pernici, Alberto Del Bimbo
Topics: VLM
11 Nov 2025

Online Clustering of Seafloor Imagery for Interpretation during Long-Term AUV Operations
Cailei Liang, Adrian Bodenmann, Sam Fenton, Blair Thornton
08 Sep 2025

Semi-Supervised Learning with Online Knowledge Distillation for Skin Lesion Classification
Siyamalan Manivannan
15 Aug 2025

Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible
Chaoyi Lu, Yiding Sun, Pengbo Li, Zhichuan Yang
Topics: FedML
05 Apr 2025

MIND: Modality-Informed Knowledge Distillation Framework for Multimodal Clinical Prediction Tasks
Alejandro Guerra-Manzanares, Farah E. Shamout
03 Feb 2025

PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation
Mike Ranzinger, Jon Barker, Greg Heinrich, Pavlo Molchanov, Bryan Catanzaro, Andrew Tao
02 Oct 2024

Exploiting Student Parallelism for Efficient GPU Inference of BERT-like Models in Online Services
Weiyan Wang, Yilun Jin, Yiming Zhang, Victor Junqiu Wei, Han Tian, Li Chen, Jinbao Xue, Yangyu Tao, Di Wang, Kai Chen
22 Aug 2024

UNIC: Universal Classification Models via Multi-teacher Distillation
European Conference on Computer Vision (ECCV), 2024
Mert Bulent Sariyildiz, Philippe Weinzaepfel, Thomas Lucas, Diane Larlus, Yannis Kalantidis
09 Aug 2024

FusionBench: A Unified Library and Comprehensive Benchmark for Deep Model Fusion
Anke Tang, Li Shen, Yong Luo, Enneng Yang, Di Lin, Bo Du, Dacheng Tao
Topics: ELM · MoMe · VLM
05 Jun 2024

E2GNN: Efficient Graph Neural Network Ensembles for Semi-Supervised Classification
Xin Zhang, Daochen Zha, Qiaoyu Tan
06 May 2024

Why does Knowledge Distillation Work? Rethink its Attention and Fidelity Mechanism
Chenqi Guo, Shiwei Zhong, Xiaofeng Liu, Qianli Feng, Yinglong Ma
30 Apr 2024

ApproxDARTS: Differentiable Neural Architecture Search with Approximate Multipliers
Michal Pinos, Lukáš Sekanina, Vojtěch Mrázek
Topics: MQ
08 Apr 2024

DeNetDM: Debiasing by Network Depth Modulation
Silpa Vadakkeeveetil Sreelatha, Adarsh Kappiyath, Anjan Dutta
28 Mar 2024

Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model
Zichang Liu, Qingyun Liu, Yuening Li, Liang Liu, Anshumali Shrivastava, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao
Topics: VLM
21 Feb 2024

Knowledge Distillation on Spatial-Temporal Graph Convolutional Network for Traffic Prediction
International Journal of Computer Applications (IJCA), 2024
Mohammad Izadi, M. Safayani, Abdolreza Mirzaei
22 Jan 2024

AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One
Michael Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov
Topics: VLM
10 Dec 2023

EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence
IEEE Access, 2023
Ilkay Sikdokur, Inci M. Baytas, A. Yurdakul
Topics: FedML
25 Jul 2023

Multimodal Distillation for Egocentric Action Recognition
IEEE International Conference on Computer Vision (ICCV), 2023
Gorjan Radevski, Dusan Grujicic, Marie-Francine Moens, Matthew Blaschko, Tinne Tuytelaars
Topics: EgoV
14 Jul 2023

Structured Network Pruning by Measuring Filter-wise Interactions
Wenting Tang, Xingxing Wei, Yue Liu
03 Jul 2023

EnSiam: Self-Supervised Learning With Ensemble Representations
Kai Han, Minsik Lee
Topics: SSL
22 May 2023

Distilling Robustness into Natural Language Inference Models with Domain-Targeted Augmentation
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Joe Stacey, Marek Rei
22 May 2023

FSNet: Redesign Self-Supervised MonoDepth for Full-Scale Depth Prediction for Autonomous Driving
IEEE Transactions on Automation Science and Engineering (IEEE TASE), 2023
Yuxuan Liu, Zhenhua Xu, Huaiyang Huang, Lujia Wang, Ming-Yuan Liu
Topics: MDE
21 Apr 2023

Knowledge Distillation for Efficient Sequences of Training Runs
Xingyu Liu, A. Leonardi, Lu Yu, Chris Gilmer-Hill, Matthew L. Leavitt, Jonathan Frankle
11 Mar 2023

Instance-aware Model Ensemble With Distillation For Unsupervised Domain Adaptation
Weimin Wu, Jiayuan Fan, Tao Chen, Hancheng Ye, Bo Zhang, Baopu Li
15 Nov 2022

Reduce, Reuse, Recycle: Improving Training Efficiency with Distillation
Cody Blakeney, Jessica Zosa Forde, Jonathan Frankle, Ziliang Zong, Matthew L. Leavitt
Topics: VLM
01 Nov 2022

End-to-end Ensemble-based Feature Selection for Paralinguistics Tasks
Tamás Grósz, Mittul Singh, Sudarsana Reddy Kadiri, H. Kathania, M. Kurimo
28 Oct 2022

Federated Learning with Privacy-Preserving Ensemble Attention Distillation
IEEE Transactions on Medical Imaging (IEEE TMI), 2022
Xuan Gong, Liangchen Song, Rishi Vedula, Abhishek Sharma, Meng Zheng, ..., Arun Innanje, Terrence Chen, Junsong Yuan, David Doermann, Ziyan Wu
Topics: FedML
16 Oct 2022

Label driven Knowledge Distillation for Federated Learning with non-IID Data
Minh-Duong Nguyen, Quoc-Viet Pham, D. Hoang, Long Tran-Thanh, Diep N. Nguyen, Won Joo Hwang
29 Sep 2022

Preserving Privacy in Federated Learning with Ensemble Cross-Domain Knowledge Distillation
AAAI Conference on Artificial Intelligence (AAAI), 2022
Xuan Gong, Abhishek Sharma, Srikrishna Karanam, Ziyan Wu, Terrence Chen, David Doermann, Arun Innanje
Topics: FedML
10 Sep 2022

Enhancing Heterogeneous Federated Learning with Knowledge Extraction and Multi-Model Fusion
Duy Phuong Nguyen, Sixing Yu, J. P. Muñoz, Ali Jannesari
Topics: FedML
16 Aug 2022

Knowledge Distillation via Weighted Ensemble of Teaching Assistants
International Conference on Big Knowledge (ICBK), 2021
Durga Prasad Ganta, Himel Das Gupta, Victor S. Sheng
23 Jun 2022

CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing
IEEE Transactions on Multimedia (IEEE TMM), 2022
Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, J. An
24 May 2022

ELODI: Ensemble Logit Difference Inhibition for Positive-Congruent Training
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Yue Zhao, Yantao Shen, Yuanjun Xiong, Shuo Yang, Wei Xia, Zhuowen Tu, Bernt Schiele, Stefano Soatto
Topics: BDL
12 May 2022

Peer Collaborative Learning for Polyphonic Sound Event Detection
Hayato Endo, Hiromitsu Nishizaki
07 Oct 2021

ParaDiS: Parallelly Distributable Slimmable Neural Networks
A. Ozerov, Anne Lambert, S. Kumaraswamy
Topics: UQCV · MoE
06 Oct 2021

Boosting of Head Pose Estimation by Knowledge Distillation
A. Sheka, V. Samun
20 Aug 2021

Students are the Best Teacher: Exit-Ensemble Distillation with Multi-Exits
Hojung Lee, Jong-Seok Lee
01 Apr 2021

Embedded Knowledge Distillation in Depth-Level Dynamic Neural Network
Qi Zhao, Shuchang Lyu, Zhiwei Zhang, Ting-Bing Xu, Guangliang Cheng
01 Mar 2021

A Comprehensive Survey on Hardware-Aware Neural Architecture Search
Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wistuba, Naigang Wang
22 Jan 2021

ProxylessKD: Direct Knowledge Distillation with Inherited Classifier for Face Recognition
W. Shi, Guanghui Ren, Yunpeng Chen, Shuicheng Yan
Topics: CVBM
31 Oct 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
Topics: VLM
09 Jun 2020