ResearchTrend.AI
Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model

31 March 2020
Dongdong Wang, Yandong Li, Liqiang Wang, Boqing Gong

Papers citing "Neural Networks Are More Productive Teachers Than Human Raters: Active Mixup for Data-Efficient Knowledge Distillation from a Blackbox Model"

4 / 4 papers shown
Title: ERSAM: Neural Architecture Search For Energy-Efficient and Real-Time Social Ambiance Measurement
Authors: Chaojian Li, Wenwan Chen, Jiayi Yuan, Yingyan Lin, Ashutosh Sabharwal
Date: 19 Mar 2023

Title: Isotonic Data Augmentation for Knowledge Distillation
Authors: Wanyun Cui, Sen Yan
Date: 03 Jul 2021

Title: Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Authors: Guodong Xu, Ziwei Liu, Chen Change Loy
Tags: UQCV
Date: 17 Dec 2020

Title: Relation Distillation Networks for Video Object Detection
Authors: Jiajun Deng, Yingwei Pan, Ting Yao, Wen-gang Zhou, Houqiang Li, Tao Mei
Tags: ObjD
Date: 26 Aug 2019