Progressive Network Grafting for Few-Shot Knowledge Distillation
arXiv: 2012.04915 (v2, latest)
9 December 2020
Chengchao Shen, Xinchao Wang, Youtan Yin, Mingli Song, Sihui Luo, Xiuming Zhang
ArXiv (abs) · PDF · HTML · GitHub (32★)

Papers citing "Progressive Network Grafting for Few-Shot Knowledge Distillation"

20 / 20 papers shown

Condensed Data Expansion Using Model Inversion for Knowledge Distillation
Kuluhan Binici, Shivam Aggarwal, C. Acar, N. Pham, K. Leman, Gim Hee Lee, Tulika Mitra
25 Aug 2024

ELF-UA: Efficient Label-Free User Adaptation in Gaze Estimation
Yong Wu, Yang Wang, Sanqing Qu, Zhijun Li, Guang Chen
13 Jun 2024

Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers
Diana-Nicoleta Grigore, Mariana-Iuliana Georgescu, J. A. Justo, T. Johansen, Andreea-Iuliana Ionescu, Radu Tudor Ionescu
14 Apr 2024

Federated Distillation: A Survey
Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi, Dacheng Tao
DD, FedML
02 Apr 2024

One-Step Forward and Backtrack: Overcoming Zig-Zagging in Loss-Aware Quantization Training
Lianbo Ma, Yuee Zhou, Jianlun Ma, Guo-Ding Yu, Qing Li
MQ
30 Jan 2024

StableKD: Breaking Inter-block Optimization Entanglement for Stable Knowledge Distillation
Shiu-hong Kao, Jierun Chen, S.-H. Gary Chan
20 Dec 2023

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu
VLM, OffRL
19 Jun 2023

NORM: Knowledge Distillation via N-to-One Representation Matching
International Conference on Learning Representations (ICLR), 2023
Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao
23 May 2023

Bespoke: A Block-Level Neural Network Optimization Framework for Low-Cost Deployment
AAAI Conference on Artificial Intelligence (AAAI), 2023
Jong-Ryul Lee, Yong-Hyuk Moon
03 Mar 2023

Large Language Models Are Reasoning Teachers
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Namgyu Ho, Laura Schmid, Se-Young Yun
ReLM, ELM, LRM
20 Dec 2022

Deep Incubation: Training Large Models by Divide-and-Conquering
IEEE International Conference on Computer Vision (ICCV), 2023
Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang
VLM
08 Dec 2022

Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2023
Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang
VLM
18 Oct 2022

Compressing Models with Few Samples: Mimicking then Replacing
Computer Vision and Pattern Recognition (CVPR), 2022
Huanyu Wang, Junjie Liu, Xin Ma, Yang Yong, Z. Chai, Jianxin Wu
VLM, OffRL
07 Jan 2022

Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
Gongfan Fang, Yifan Bao, Mingli Song, Xinchao Wang, Don Xie, Chengchao Shen, Xiuming Zhang
27 Oct 2021

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm
Shaoyi Huang, Dongkuan Xu, Ian En-Hsu Yen, Yijue Wang, Sung-En Chang, ..., Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding
CLL, VLM
15 Oct 2021

Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks
IEEE International Conference on Computer Vision (ICCV), 2021
Yongcheng Jing, Yiding Yang, Xinchao Wang, Xiuming Zhang, Dacheng Tao
27 Sep 2021

KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation
International Joint Conference on Artificial Intelligence (IJCAI), 2021
Mengqi Xue, Mingli Song, Xinchao Wang, Pengcheng Chen, Xingen Wang, Xiuming Zhang
10 May 2021

Distilling Knowledge via Intermediate Classifiers
Aryan Asadian, Amirali Salehi-Abari
28 Feb 2021

Training Generative Adversarial Networks in One Stage
Computer Vision and Pattern Recognition (CVPR), 2021
Chengchao Shen, Youtan Yin, Xinchao Wang, Xubin Li, Mingli Song, Xiuming Zhang
GAN
28 Feb 2021

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM
09 Jun 2020