Search to Distill: Pearls are Everywhere but not the Eyes
arXiv:1911.09074, 20 November 2019
Yu Liu, Xuhui Jia, Mingxing Tan, Raviteja Vemulapalli, Yukun Zhu, Bradley Green, Xiaogang Wang

Papers citing "Search to Distill: Pearls are Everywhere but not the Eyes"

40 / 40 papers shown
Generalizing Teacher Networks for Effective Knowledge Distillation Across Student Architectures. Kuluhan Binici, Weiming Wu, Tulika Mitra. 22 Jul 2024.
Adaptive Teaching with Shared Classifier for Knowledge Distillation. Jaeyeon Jang, Young-Ik Kim, Jisu Lim, Hyeonseong Lee. 12 Jun 2024.
EdgeFM: Leveraging Foundation Model for Open-set Learning on the Edge. Bufang Yang, Lixing He, Neiwen Ling, Zhenyu Yan, Guoliang Xing, Xian Shuai, Xiaozhe Ren, Xin Jiang. 18 Nov 2023.
Group channel pruning and spatial attention distilling for object detection. Yun Chu, Pu Li, Yong Bai, Zhuhua Hu, Yongqing Chen, Jiafeng Lu. 02 Jun 2023. [VLM]
Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets. Hayeon Lee, Sohyun An, Minseon Kim, Sung Ju Hwang. 26 May 2023. [OOD]
NORM: Knowledge Distillation via N-to-One Representation Matching. Xiaolong Liu, Lujun Li, Chao Li, Anbang Yao. 23 May 2023.
Visual Tuning. Bruce X. B. Yu, Jianlong Chang, Haixin Wang, Lin Liu, Shijie Wang, ..., Lingxi Xie, Haojie Li, Zhouchen Lin, Qi Tian, Chang Wen Chen. 10 May 2023. [VLM]
DisWOT: Student Architecture Search for Distillation WithOut Training. Peijie Dong, Lujun Li, Zimian Wei. 28 Mar 2023.
Neural Architecture Search for Effective Teacher-Student Knowledge Transfer in Language Models. Aashka Trivedi, Takuma Udagawa, Michele Merler, Rameswar Panda, Yousef El-Kurdi, Bishwaranjan Bhattacharjee. 16 Mar 2023.
Improving Differentiable Architecture Search via Self-Distillation. Xunyu Zhu, Jian Li, Yong Liu, Weiping Wang. 11 Feb 2023.
NAS-LID: Efficient Neural Architecture Search with Local Intrinsic Dimension. Xin He, Jiangchao Yao, Yuxin Wang, Zhenheng Tang, Ka Chu Cheung, Simon See, Bo Han, X. Chu. 23 Nov 2022.
Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey. Dalin Zhang, Kaixuan Chen, Yan Zhao, B. Yang, Li-Ping Yao, Christian S. Jensen. 22 Aug 2022.
Revisiting Architecture-aware Knowledge Distillation: Smaller Models and Faster Search. Taehyeon Kim, Heesoo Myeong, Se-Young Yun. 27 Jun 2022.
DistPro: Searching A Fast Knowledge Distillation Process via Meta Optimization. XueQing Deng, Dawei Sun, Shawn D. Newsam, Peng Wang. 12 Apr 2022.
A Novel Architecture Slimming Method for Network Pruning and Knowledge Distillation. Dongqi Wang, Shengyu Zhang, Zhipeng Di, Xin Lin, Weihua Zhou, Fei Wu. 21 Feb 2022.
AUTOKD: Automatic Knowledge Distillation Into A Student Architecture Family. Roy Henha Eyono, Fabio Maria Carlucci, P. Esperança, Binxin Ru, Phillip Torr. 05 Nov 2021.
Light-weight Deformable Registration using Adversarial Learning with Distilling Knowledge. M. Tran, Tuong Khanh Long Do, Huy Tran, Erman Tjiputra, Quang-Dieu Tran, Anh Nguyen. 04 Oct 2021. [MedIm]
LANA: Latency Aware Network Acceleration. Pavlo Molchanov, Jimmy Hall, Hongxu Yin, Jan Kautz, Nicolò Fusi, Arash Vahdat. 12 Jul 2021.
Mutually-aware Sub-Graphs Differentiable Architecture Search. Hao Tan, Sheng Guo, Yujie Zhong, Matthew R. Scott, Weilin Huang. 09 Jul 2021.
A Light-weight Deep Human Activity Recognition Algorithm Using Multi-knowledge Distillation. Runze Chen, Haiyong Luo, Fang Zhao, Xuechun Meng, Zhiqing Xie, Yida Zhu. 06 Jul 2021. [VLM, HAI]
Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation. Lewei Yao, Renjie Pi, Hang Xu, Wei Zhang, Zhenguo Li, Tong Zhang. 27 May 2021.
FNAS: Uncertainty-Aware Fast Neural Architecture Search. Jihao Liu, Ming Zhang, Yangting Sun, B. Liu, Guanglu Song, Yu Liu, Hongsheng Li. 25 May 2021.
Compatibility-aware Heterogeneous Visual Search. Rahul Duggal, Hao Zhou, Shuo Yang, Yuanjun Xiong, W. Xia, Z. Tu, Stefano Soatto. 13 May 2021.
Distilling a Powerful Student Model via Online Knowledge Distillation. Shaojie Li, Mingbao Lin, Yan Wang, Yongjian Wu, Yonghong Tian, Ling Shao, Rongrong Ji. 26 Mar 2021. [FedML]
Towards Personalized Federated Learning. A. Tan, Han Yu, Li-zhen Cui, Qiang Yang. 01 Mar 2021. [FedML, AI4CE]
Effective Model Compression via Stage-wise Pruning. Mingyang Zhang, Xinyi Yu, Jingtao Rong, L. Ou. 10 Nov 2020. [SyDa]
Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search. Houwen Peng, Hao Du, Hongyuan Yu, Qi Li, Jing Liao, Jianlong Fu. 29 Oct 2020.
Neighbourhood Distillation: On the benefits of non end-to-end distillation. Laetitia Shao, Max Moroz, Elad Eban, Yair Movshovitz-Attias. 02 Oct 2020. [ODL]
From Federated Learning to Federated Neural Architecture Search: A Survey. Hangyu Zhu, Haoyu Zhang, Yaochu Jin. 12 Sep 2020. [FedML, OOD, AI4CE]
Weight-Sharing Neural Architecture Search: A Battle to Shrink the Optimization Gap. Lingxi Xie, Xin Chen, Kaifeng Bi, Longhui Wei, Yuhui Xu, ..., Lanfei Wang, Anxiang Xiao, Jianlong Chang, Xiaopeng Zhang, Qi Tian. 04 Aug 2020. [ViT]
Towards Practical Lipreading with Distilled and Efficient Models. Pingchuan Ma, Brais Martínez, Stavros Petridis, M. Pantic. 13 Jul 2020.
On the Demystification of Knowledge Distillation: A Residual Network Perspective. N. Jha, Rajat Saini, Sparsh Mittal. 30 Jun 2020.
Cyclic Differentiable Architecture Search. Hongyuan Yu, Houwen Peng, Yan Huang, Jianlong Fu, Hao Du, Liang Wang, Haibin Ling. 18 Jun 2020. [3DPC]
Multi-fidelity Neural Architecture Search with Knowledge Distillation. I. Trofimov, Nikita Klyuchnikov, Mikhail Salnikov, Alexander N. Filippov, Evgeny Burnaev. 15 Jun 2020.
Knowledge Distillation: A Survey. Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao. 09 Jun 2020. [VLM]
Why distillation helps: a statistical perspective. A. Menon, A. S. Rawat, Sashank J. Reddi, Seungyeon Kim, Sanjiv Kumar. 21 May 2020. [FedML]
Structured Sparsification with Joint Optimization of Group Convolution and Channel Shuffle. Xinyu Zhang, Kai Zhao, Taihong Xiao, Mingg-Ming Cheng, Ming-Hsuan Yang. 19 Feb 2020.
Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach. Zeyue Xue, Shuang Luo, Chao-Xiang Wu, Pan Zhou, Kaigui Bian, Wei Du. 06 Feb 2020.
Neural Architecture Search with Reinforcement Learning. Barret Zoph, Quoc V. Le. 05 Nov 2016.
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean. 26 Sep 2016. [AIMat]