MetaDistiller: Network Self-Boosting via Meta-Learned Top-Down Distillation
arXiv:2008.12094 · 27 August 2020
Benlin Liu, Yongming Rao, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh
Papers citing "MetaDistiller: Network Self-Boosting via Meta-Learned Top-Down Distillation" (9 papers)
Learning Task-preferred Inference Routes for Gradient De-conflict in Multi-output DNNs
Yi Sun, Xin Xu, Jian Li, Xiaochang Hu, Yifei Shi, L. Zeng · 31 May 2023 (37 / 2 / 0)

Few-Shot Learning of Compact Models via Task-Specific Meta Distillation [VLM]
Yong Wu, Shekhor Chanda, M. Hosseinzadeh, Zhi Liu, Yang Wang · 18 Oct 2022 (29 / 7 / 0)

LGD: Label-guided Self-distillation for Object Detection [ObjD]
Peizhen Zhang, Zijian Kang, Tong Yang, Xinming Zhang, N. Zheng, Jian Sun · 23 Sep 2021 (106 / 30 / 0)

Learning to Teach with Student Feedback [VLM]
Yitao Liu, Tianxiang Sun, Xipeng Qiu, Xuanjing Huang · 10 Sep 2021 (23 / 6 / 0)

DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification [ViT]
Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh · 03 Jun 2021 (34 / 670 / 0)

Towards Compact Single Image Super-Resolution via Contrastive Self-distillation [SupR]
Yanbo Wang, Shaohui Lin, Yanyun Qu, Haiyan Wu, Zhizhong Zhang, Yuan Xie, Angela Yao · 25 May 2021 (28 / 53 / 0)

HourNAS: Extremely Fast Neural Architecture Search Through an Hourglass Lens
Zhaohui Yang, Yunhe Wang, Xinghao Chen, Jianyuan Guo, Wei Zhang, Chao Xu, Chunjing Xu, Dacheng Tao, Chang Xu · 29 May 2020 (39 / 17 / 0)

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [OOD]
Chelsea Finn, Pieter Abbeel, Sergey Levine · 09 Mar 2017 (466 / 11,715 / 0)

Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He · 16 Nov 2016 (312 / 10,233 / 0)