Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch
arXiv:2405.13078 · 21 May 2024
Xin-Chun Li, Wen-Shu Fan, Bowen Tao, Le Gan, De-Chuan Zhan

Papers citing "Exploring Dark Knowledge under Various Teacher Capacities and Addressing Capacity Mismatch" (6 of 6 papers shown)

Simple Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Seongjae Kang, Dong Bok Lee, Hyungjoon Jang, Sung Ju Hwang
VLM · 12 May 2025 · 0 citations

Efficient and Robust Knowledge Distillation from A Stronger Teacher Based on Correlation Matching
Wenqi Niu, Yingchao Wang, Guohui Cai, Hanpo Hou
09 Oct 2024 · 0 citations

Asymmetric Temperature Scaling Makes Larger Networks Teach Well Again
Xin-Chun Li, Wen-Shu Fan, Shaoming Song, Yinchuan Li, Bingshuai Li, Yunfeng Shao, De-Chuan Zhan
10 Oct 2022 · 30 citations

Knowledge Distillation in Wide Neural Networks: Risk Bound, Data Efficiency and Imperfect Teacher
Guangda Ji, Zhanxing Zhu
20 Oct 2020 · 42 citations

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH · 17 Apr 2017 · 20,549 citations

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
16 Nov 2016 · 10,214 citations