Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong
arXiv 2003.13960, 31 March 2020
Papers citing "Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model" (4 papers):
ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement
Chaojian Li, Wenwan Chen, Jiayi Yuan, Yingyan Lin, Ashutosh Sabharwal
19 Mar 2023

Isotonic Data Augmentation for Knowledge Distillation
Wanyun Cui, Sen Yan
03 Jul 2021

Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Guodong Xu, Ziwei Liu, Chen Change Loy
17 Dec 2020

Relation Distillation Networks for Video Object Detection
Jiajun Deng, Yingwei Pan, Ting Yao, Wen-gang Zhou, Houqiang Li, Tao Mei
26 Aug 2019