ResearchTrend.AI

One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation

30 October 2023
Zhiwei Hao, Jianyuan Guo, Kai Han, Yehui Tang, Han Hu, Yunhe Wang, Chang Xu

Papers citing "One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation" (19 / 19 papers shown)

Image Recognition with Online Lightweight Vision Transformer: A Survey
Zherui Zhang, Rongtao Xu, Jie Zhou, Changwei Wang, Xingtian Pei, ..., Jiguang Zhang, Li Guo, Longxiang Gao, W. Xu, Shibiao Xu
ViT · 39 · 0 · 0 · 06 May 2025

Uncertainty-Aware Multi-Expert Knowledge Distillation for Imbalanced Disease Grading
Shuo Tong, Shangde Gao, Ke Liu, Zihang Huang, Hongxia Xu, Haochao Ying, J. Wu
24 · 0 · 0 · 01 May 2025

Jointly Understand Your Command and Intention: Reciprocal Co-Evolution between Scene-Aware 3D Human Motion Synthesis and Analysis
Xuehao Gao, Yang Yang, Shaoyi Du, Guo-Jun Qi, Junwei Han
34 · 1 · 0 · 01 Mar 2025

Classroom-Inspired Multi-Mentor Distillation with Adaptive Learning Strategies
Shalini Sarode, Muhammad Saif Ullah Khan, Tahira Shehzadi, Didier Stricker, Muhammad Zeshan Afzal
29 · 0 · 0 · 30 Sep 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture
Qianlong Xiang, Miao Zhang, Yuzhang Shang, Jianlong Wu, Yan Yan, Liqiang Nie
DiffM · 45 · 8 · 0 · 05 Sep 2024

Overcoming Uncertain Incompleteness for Robust Multimodal Sequential Diagnosis Prediction via Curriculum Data Erasing Guided Knowledge Distillation
Heejoon Koo
33 · 0 · 0 · 28 Jul 2024

Cross-Architecture Auxiliary Feature Space Translation for Efficient Few-Shot Personalized Object Detection
F. Barbato, Umberto Michieli, J. Moon, Pietro Zanuttigh, Mete Ozay
30 · 2 · 0 · 01 Jul 2024

ReDistill: Residual Encoded Distillation for Peak Memory Reduction of CNNs
Fang Chen, Gourav Datta, Mujahid Al Rafi, Hyeran Jeon, Meng Tang
88 · 1 · 0 · 06 Jun 2024

CKD: Contrastive Knowledge Distillation from A Sample-wise Perspective
Wencheng Zhu, Xin Zhou, Pengfei Zhu, Yu Wang, Qinghua Hu
VLM · 41 · 1 · 0 · 22 Apr 2024

Distilling Morphology-Conditioned Hypernetworks for Efficient Universal Morphology Control
Zheng Xiong, Risto Vuorio, Jacob Beck, Matthieu Zimmer, Kun Shao, Shimon Whiteson
22 · 1 · 0 · 09 Feb 2024

Dataset Distillation via Factorization
Songhua Liu, Kai Wang, Xingyi Yang, Jingwen Ye, Xinchao Wang
DD · 124 · 137 · 0 · 30 Oct 2022

CDFKD-MFS: Collaborative Data-free Knowledge Distillation via Multi-level Feature Sharing
Zhiwei Hao, Yong Luo, Zhi Wang, Han Hu, J. An
29 · 25 · 0 · 24 May 2022

CMT: Convolutional Neural Networks Meet Vision Transformers
Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Chunjing Xu, Yunhe Wang, Chang Xu
ViT · 323 · 500 · 0 · 13 Jul 2021

MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
239 · 2,554 · 0 · 04 May 2021

Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia
144 · 308 · 0 · 19 Apr 2021

Transformer in Transformer
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
ViT · 279 · 1,490 · 0 · 27 Feb 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT · 263 · 3,538 · 0 · 24 Feb 2021

Distilling portable Generative Adversarial Networks for Image Translation
Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
66 · 83 · 0 · 07 Mar 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
187 · 436 · 0 · 12 Jun 2018