Rethinking Knowledge Distillation via Cross-Entropy

Zhendong Yang, Zhe Li, Yuan Gong, Tianke Zhang, Shanshan Lao, Chun Yuan, Yu Li
arXiv 2208.10139 · 22 August 2022

Papers citing "Rethinking Knowledge Distillation via Cross-Entropy"

10 / 10 papers shown
Title
Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation
Yaofo Chen, Shuaicheng Niu, Yaowei Wang, Shoukai Xu, Hengjie Song, Mingkui Tan
27 Feb 2024 · 6 citations

Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning
Wenke Huang, J. J. Valero-Mas, Dasaem Jeong, Bo Du
28 Sep 2023 · 44 citations · FedML

Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection
Longrong Yang, Xianpan Zhou, Xuewei Li, Liang Qiao, Zheyang Li, Zi-Liang Yang, Gaoang Wang, Xi Li
28 Aug 2023 · 4 citations

Effective Whole-body Pose Estimation with Two-stages Distillation
Zhendong Yang, Ailing Zeng, Chun Yuan, Yu Li
29 Jul 2023 · 151 citations

Are Large Kernels Better Teachers than Transformers for ConvNets?
Tianjin Huang, Lu Yin, Zhenyu (Allen) Zhang, Lijuan Shen, Meng Fang, Mykola Pechenizkiy, Zhangyang Wang, Shiwei Liu
30 May 2023 · 13 citations

Student-friendly Knowledge Distillation
Mengyang Yuan, Bo Lang, Fengnan Quan
18 May 2023 · 17 citations

Generic-to-Specific Distillation of Masked Autoencoders
Wei Huang, Zhiliang Peng, Li Dong, Furu Wei, Jianbin Jiao, QiXiang Ye
28 Feb 2023 · 22 citations

ViTKD: Practical Guidelines for ViT feature knowledge distillation
Zhendong Yang, Zhe Li, Ailing Zeng, Zexian Li, Chun Yuan, Yu Li
06 Sep 2022 · 42 citations

Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia
19 Apr 2021 · 416 citations

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
17 Apr 2017 · 20,214 citations · 3DH