Progressive Network Grafting for Few-Shot Knowledge Distillation
arXiv 2012.04915 · 9 December 2020
Chengchao Shen, Xinchao Wang, Youtan Yin, Mingli Song, Sihui Luo, Xiuming Zhang
ArXiv (abs) · PDF · HTML · GitHub (32★)

Papers citing "Progressive Network Grafting for Few-Shot Knowledge Distillation" (20 / 20 papers shown)

Condensed Sample-Guided Model Inversion for Knowledge Distillation
Kuluhan Binici, Shivam Aggarwal, Cihan Acar, N. Pham, K. Leman, Gim Hee Lee, Tulika Mitra
25 Aug 2024

ELF-UA: Efficient Label-Free User Adaptation in Gaze Estimation
Yong Wu, Yang Wang, Sanqing Qu, Zhijun Li, Guang Chen
13 Jun 2024

Weight Copy and Low-Rank Adaptation for Few-Shot Distillation of Vision Transformers
Diana-Nicoleta Grigore, Mariana-Iuliana Georgescu, J. A. Justo, T. Johansen, Andreea-Iuliana Ionescu, Radu Tudor Ionescu
14 Apr 2024

Federated Distillation: A Survey
Lin Li, Jianping Gou, Baosheng Yu, Lan Du, Zhang Yi, Dacheng Tao
02 Apr 2024 · DD, FedML

One-Step Forward and Backtrack: Overcoming Zig-Zagging in Loss-Aware Quantization Training
Lianbo Ma, Yuee Zhou, Jianlun Ma, Guo-Ding Yu, Qing Li
30 Jan 2024 · MQ

StableKD: Breaking Inter-block Optimization Entanglement for Stable Knowledge Distillation
Shiu-hong Kao, Jierun Chen, S.-H. Gary Chan
20 Dec 2023

Categories of Response-Based, Feature-Based, and Relation-Based Knowledge Distillation
Chuanguang Yang, Xinqiang Yu, Zhulin An, Yongjun Xu
19 Jun 2023 · VLM, OffRL

NORM: Knowledge Distillation via N-to-One Representation Matching
Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao
23 May 2023

Bespoke: A Block-Level Neural Network Optimization Framework for Low-Cost Deployment
Jong-Ryul Lee, Yong-Hyuk Moon
03 Mar 2023

Large Language Models Are Reasoning Teachers
Namgyu Ho, Laura Schmid, Se-Young Yun
20 Dec 2022 · ReLM, ELM, LRM

Deep Incubation: Training Large Models by Divide-and-Conquering
Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang
08 Dec 2022 · VLM

Few-Shot Learning of Compact Models via Task-Specific Meta Distillation
Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang
18 Oct 2022 · VLM

Compressing Models with Few Samples: Mimicking then Replacing
Huanyu Wang, Junjie Liu, Xin Ma, Yang Yong, Z. Chai, Jianxin Wu
07 Jan 2022 · VLM, OffRL

Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data
Gongfan Fang, Yifan Bao, Mingli Song, Xinchao Wang, Don Xie, Chengchao Shen, Xiuming Zhang
27 Oct 2021

Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm
Shaoyi Huang, Dongkuan Xu, Ian En-Hsu Yen, Yijue Wang, Sung-En Chang, ..., Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding
15 Oct 2021 · CLL, VLM

Meta-Aggregator: Learning to Aggregate for 1-bit Graph Neural Networks
Yongcheng Jing, Yiding Yang, Xinchao Wang, Xiuming Zhang, Dacheng Tao
27 Sep 2021

KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation
Mengqi Xue, Mingli Song, Xinchao Wang, Ying Chen, Xingen Wang, Xiuming Zhang
10 May 2021

Distilling Knowledge via Intermediate Classifiers
Aryan Asadian, Amirali Salehi-Abari
28 Feb 2021

Training Generative Adversarial Networks in One Stage
Chengchao Shen, Youtan Yin, Xinchao Wang, Xubin Li, Mingli Song, Xiuming Zhang
28 Feb 2021 · GAN

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
09 Jun 2020 · VLM