Adaptive Multi-Teacher Multi-level Knowledge Distillation
Yuang Liu, Wei Zhang, Jun Wang
arXiv 2103.04062 · 6 March 2021
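
For orientation, the sketch below illustrates the general multi-teacher distillation pattern the title refers to: a student is trained against a weighted mixture of several teachers' soft targets, with the per-teacher weights produced adaptively rather than fixed by hand. This is a minimal illustration under assumed names (`multi_teacher_kd_loss`, `TeacherGate`, PyTorch throughout), not the authors' AMTML-KD implementation, which also distills intermediate-level hints.

```python
# Minimal multi-teacher KD sketch (illustrative; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights, T=4.0):
    """Weighted sum of per-teacher soft-target KD losses at temperature T."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    loss = student_logits.new_zeros(())
    for w, t_logits in zip(weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / T, dim=-1)
        # Hinton-style KL term; the T^2 factor keeps gradients comparable
        loss = loss + w * F.kl_div(log_p_student, p_teacher,
                                   reduction="batchmean") * (T * T)
    return loss

class TeacherGate(nn.Module):
    """Hypothetical gate mapping a pooled student feature to teacher weights."""
    def __init__(self, feat_dim: int, num_teachers: int):
        super().__init__()
        self.proj = nn.Linear(feat_dim, num_teachers)

    def forward(self, student_feat):                   # (batch, feat_dim)
        scores = self.proj(student_feat).mean(dim=0)   # pool gate scores over batch
        return F.softmax(scores, dim=-1)               # weights sum to 1

# Usage sketch: two teachers, weights from the student's own features.
# gate = TeacherGate(feat_dim=128, num_teachers=2)
# w = gate(student_features)
# loss = ce_loss + multi_teacher_kd_loss(s_logits, [t1_logits, t2_logits], w)
```

In practice the distillation term is combined with the usual cross-entropy loss on ground-truth labels, as in the last line of the usage sketch.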
Papers citing "Adaptive Multi-Teacher Multi-level Knowledge Distillation" (18 papers)

Federated One-Shot Learning with Data Privacy and Objective-Hiding
Maximilian Egger, Rüdiger Urbanke, Rawad Bitar
FedML · 29 Apr 2025

Advantage-Guided Distillation for Preference Alignment in Small Language Models
Shiping Gao, Fanqi Wan, Jiajian Guo, Xiaojun Quan, Qifan Wang
ALM · 25 Feb 2025

Heuristic-Free Multi-Teacher Learning
Huy Thong Nguyen, En-Hung Chu, Lenord Melvix, Jazon Jiao, Chunglin Wen, Benjamin Louie
19 Nov 2024

PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation
Mike Ranzinger, Jon Barker, Greg Heinrich, Pavlo Molchanov, Bryan Catanzaro, Andrew Tao
02 Oct 2024

Leveraging Topological Guidance for Improved Knowledge Distillation
Eun Som Jeon, Rahul Khurana, Aishani Pathak, P. Turaga
07 Jul 2024

Topological Persistence Guided Knowledge Distillation for Wearable Sensor Data
Eun Som Jeon, Hongjun Choi, A. Shukla, Yuan Wang, Hyunglae Lee, M. Buman, P. Turaga
07 Jul 2024

Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation
Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo
23 Apr 2024

AM-RADIO: Agglomerative Vision Foundation Model -- Reduce All Domains Into One
Michael Ranzinger, Greg Heinrich, Jan Kautz, Pavlo Molchanov
VLM · 10 Dec 2023

Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu
08 Aug 2023

Learning When to Trust Which Teacher for Weakly Supervised ASR
Aakriti Agrawal, Milind Rao, Anit Kumar Sahu, Gopinath Chennupati, A. Stolcke
21 Jun 2023

Distillation from Heterogeneous Models for Top-K Recommendation
SeongKu Kang, Wonbin Kweon, Dongha Lee, Jianxun Lian, Xing Xie, Hwanjo Yu
VLM · 02 Mar 2023

Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
28 Oct 2022

Label driven Knowledge Distillation for Federated Learning with non-IID Data
Minh-Duong Nguyen, Viet Quoc Pham, D. Hoang, Long Tran-Thanh, Diep N. Nguyen, W. Hwang
29 Sep 2022

Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks
Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Xiyang Dai, ..., Jianwei Yang, Haoxuan You, Kai-Wei Chang, Shih-Fu Chang, Lu Yuan
VLM · OffRL · 22 Apr 2022

CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks
Zhecan Wang, Noel Codella, Yen-Chun Chen, Luowei Zhou, Jianwei Yang, Xiyang Dai, Bin Xiao, Haoxuan You, Shih-Fu Chang, Lu Yuan
CLIP · VLM · 15 Jan 2022

Bringing AI To Edge: From Deep Learning's Perspective
Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam
25 Nov 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM · 09 Jun 2020

Neural Compatibility Modeling with Attentive Knowledge Distillation
Xuemeng Song, Fuli Feng, Xianjing Han, Xin Yang, W. Liu, Liqiang Nie
17 Apr 2018