Label Refinery: Improving ImageNet Classification through Label Progression

7 May 2018
Hessam Bagherinezhad
Maxwell Horton
Mohammad Rastegari
Ali Farhadi
arXiv: 1805.02641

Papers citing "Label Refinery: Improving ImageNet Classification through Label Progression"

50 / 113 papers shown
Few-Shot Learning with a Strong Teacher. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Han-Jia Ye
Lu Ming
De-Chuan Zhan
Wei-Lun Chao
198
66
0
01 Jul 2021
A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning. Neural Information Processing Systems (NeurIPS), 2021
Pan Zhou
Caiming Xiong
Xiaotong Yuan
Guosheng Lin
SSL
128
12
0
28 Jun 2021
Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Yixiao Ge
Xiao Zhang
Ching Lam Choi
Ka Chun Cheung
Peipei Zhao
Feng Zhu
Xiaogang Wang
Rui Zhao
Jiaming Song
FedML, UQCV
302
36
0
27 Apr 2021
Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation
Rui Cheng
Bichen Wu
Peizhao Zhang
Peter Vajda
Joseph E. Gonzalez
CLIP, VLM
147
33
0
18 Apr 2021
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study. International Conference on Learning Representations (ICLR), 2021
Zhiqiang Shen
Zechun Liu
Dejia Xu
Zitian Chen
Kwang-Ting Cheng
Marios Savvides
148
81
0
01 Apr 2021
Enhancing Segment-Based Speech Emotion Recognition by Deep Self-Learning
Shuiyang Mao
P. Ching
Tan Lee
91
2
0
30 Mar 2021
A New Training Framework for Deep Neural Network
Zhenyan Hou
Wenxuan Fan
FedML
263
2
0
12 Mar 2021
ISP Distillation. IEEE Open Journal of Signal Processing (JOSP), 2021
Eli Schwartz
A. Bronstein
Raja Giryes
VLM
278
8
0
25 Jan 2021
Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Lang Huang
Chaoning Zhang
Hongyang R. Zhang
SSL
244
30
0
21 Jan 2021
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks. International Conference on Learning Representations (ICLR), 2021
Yige Li
Lingjuan Lyu
Nodens Koren
X. Lyu
Yue Liu
Jiabo He
AAML, FedML
291
492
0
15 Jan 2021
Object Detection for Understanding Assembly Instruction Using Context-aware Data Augmentation and Cascade Mask R-CNN
J. Lee
S. Lee
S. Back
S. Shin
K. Lee
134
6
0
07 Jan 2021
Label Augmentation via Time-based Knowledge Distillation for Financial Anomaly Detection
Hongda Shen
E. Kursun
AAML
215
2
0
05 Jan 2021
Energy-constrained Self-training for Unsupervised Domain Adaptation. International Conference on Pattern Recognition (ICPR), 2021
Xiaofeng Liu
Bo Hu
Xiongchang Liu
Jun Lu
J. You
Lingsheng Kong
289
32
0
01 Jan 2021
Learning with Retrospection. AAAI Conference on Artificial Intelligence (AAAI), 2020
Xiang Deng
Zhongfei Zhang
107
20
0
24 Dec 2020
ISD: Self-Supervised Learning by Iterative Similarity Distillation. IEEE International Conference on Computer Vision (ICCV), 2020
Ajinkya Tejankar
Soroush Abbasi Koohpayegani
Vipin Pillai
Paolo Favaro
Hamed Pirsiavash
SSL
277
47
0
16 Dec 2020
Post-Hurricane Damage Assessment Using Satellite Imagery and Geolocation Features. Risk Analysis (Risk Anal.), 2020
Q. D. Cao
Youngjun Choe
126
9
0
15 Dec 2020
Self-Training for Class-Incremental Semantic Segmentation. IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020
Lu Yu
Xialei Liu
Joost van de Weijer
CLL, SSL
240
64
0
06 Dec 2020
Layer-Wise Data-Free CNN Compression. International Conference on Pattern Recognition (ICPR), 2020
Maxwell Horton
Yanzi Jin
Ali Farhadi
Mohammad Rastegari
MQ
167
19
0
18 Nov 2020
CompRess: Self-Supervised Learning by Compressing Representations. Neural Information Processing Systems (NeurIPS), 2020
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Hamed Pirsiavash
SSL
223
97
0
28 Oct 2020
A Data Set and a Convolutional Model for Iconography Classification in Paintings
Federico Milani
Piero Fraternali
265
57
0
06 Oct 2020
MetaDistiller: Network Self-Boosting via Meta-Learned Top-Down Distillation. European Conference on Computer Vision (ECCV), 2020
Benlin Liu
Yongming Rao
Jiwen Lu
Jie Zhou
Cho-Jui Hsieh
156
41
0
27 Aug 2020
GREEN: a Graph REsidual rE-ranking Network for Grading Diabetic Retinopathy
Shaoteng Liu
Lijun Gong
Kai Ma
Yefeng Zheng
MedIm
209
44
0
20 Jul 2020
Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation. European Conference on Computer Vision (ECCV), 2020
Haoran Wang
T. Shen
Wei Zhang
Lingyu Duan
Tao Mei
155
317
0
17 Jul 2020
Differential Replication in Machine Learning
Irene Unceta
Jordi Nin
O. Pujol
SyDa
97
1
0
15 Jul 2020
Towards Practical Lipreading with Distilled and Efficient Models. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Pingchuan Ma
Brais Martínez
Stavros Petridis
Maja Pantic
223
107
0
13 Jul 2020
Learning to Learn Parameterized Classification Networks for Scalable Input Images. European Conference on Computer Vision (ECCV), 2020
Duo Li
Anbang Yao
Qifeng Chen
148
12
0
13 Jul 2020
Data-Efficient Ranking Distillation for Image Retrieval. Asian Conference on Computer Vision (ACCV), 2020
Zakaria Laskar
Arno Solin
VLM
155
4
0
10 Jul 2020
Robust Re-Identification by Multiple Views Knowledge Distillation. European Conference on Computer Vision (ECCV), 2020
Angelo Porrello
Luca Bergamini
Simone Calderara
165
72
0
08 Jul 2020
Multiple Expert Brainstorming for Domain Adaptive Person Re-identification
Yunpeng Zhai
QiXiang Ye
Shijian Lu
Mengxi Jia
Rongrong Ji
Yonghong Tian
239
185
0
03 Jul 2020
Extracurricular Learning: Knowledge Transfer Beyond Empirical Distribution
Hadi Pouransari
Mojan Javaheripi
Vinay Sharma
Oncel Tuzel
126
5
0
30 Jun 2020
Towards Understanding Label Smoothing
Yi Tian Xu
Yuanhong Xu
Qi Qian
Hao Li
Rong Jin
UQCV
135
45
0
20 Jun 2020
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
1.4K
3,607
0
09 Jun 2020
Keep off the Grass: Permissible Driving Routes from Radar with Weak Audio Supervision
David S. W. Williams
D. Martini
Matthew Gadd
Letizia Marchegiani
Paul Newman
128
12
0
11 May 2020
COLAM: Co-Learning of Deep Neural Networks and Soft Labels via Alternating Minimization. Neural Processing Letters (NPL), 2020
Xingjian Li
Haoyi Xiong
Haozhe An
Dejing Dou
Chengzhong Xu
FedML
109
3
0
26 Apr 2020
Regularizing Class-wise Predictions via Self-knowledge Distillation. Computer Vision and Pattern Recognition (CVPR), 2020
Sukmin Yun
Jongjin Park
Kimin Lee
Jinwoo Shin
209
319
0
31 Mar 2020
Circumventing Outliers of AutoAugment with Knowledge Distillation. European Conference on Computer Vision (ECCV), 2020
Longhui Wei
Anxiang Xiao
Lingxi Xie
Xin Chen
Xiaopeng Zhang
Qi Tian
145
66
0
25 Mar 2020
Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-task Learning
Lie Ju
Xin Wang
Xin Zhao
Huimin Lu
Dwarikanath Mahapatra
Paul Bonnington
Z. Ge
146
1
0
24 Mar 2020
Self-Adaptive Training: beyond Empirical Risk Minimization. Neural Information Processing Systems (NeurIPS), 2020
Lang Huang
Chaoning Zhang
Hongyang R. Zhang
NoLa
287
225
0
24 Feb 2020
Subclass Distillation
Rafael Müller
Simon Kornblith
Geoffrey E. Hinton
146
35
0
10 Feb 2020
Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation. British Machine Vision Conference (BMVC), 2019
Madan Ravi Ganesh
Jason J. Corso
ODL
164
10
0
13 Jan 2020
Least squares binary quantization of neural networks
Hadi Pouransari
Zhucheng Tu
Oncel Tuzel
MQ
200
36
0
09 Jan 2020
A simple baseline for domain adaptation using rotation prediction
Ajinkya Tejankar
Hamed Pirsiavash
SSL
124
5
0
26 Dec 2019
FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference
Bram-Ernst Verhoef
Nathan Laubeuf
S. Cosemans
P. Debacker
Ioannis A. Papistas
A. Mallik
D. Verkest
MQ
177
16
0
19 Dec 2019
Preparing Lessons: Improve Knowledge Distillation with Better Supervision
Tiancheng Wen
Shenqi Lai
Xueming Qian
356
75
0
18 Nov 2019
Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications. Engineering Applications of Artificial Intelligence (EAAI), 2019
M. Terzi
Gian Antonio Susto
Pratik Chaudhari
OOD, AAML
122
17
0
08 Oct 2019
Distillation ≈ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized Neural Network
Bin Dong
Jikai Hou
Yiping Lu
Zhihua Zhang
152
42
0
02 Oct 2019
Confidence Regularized Self-Training. IEEE International Conference on Computer Vision (ICCV), 2019
Yang Zou
Zhiding Yu
Xiaofeng Liu
B. Kumar
Jinsong Wang
561
867
0
26 Aug 2019
Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement
Zhiqiang Shen
Zhankui He
Wanyun Cui
Jiahui Yu
Yutong Zheng
Chenchen Zhu
Marios Savvides
AAML
104
5
0
22 Aug 2019
Efficient Deep Neural Networks
Bichen Wu
135
12
0
20 Aug 2019
Adaptive Regularization of Labels
Qianggang Ding
Sifan Wu
Hao Sun
Jiadong Guo
Shutao Xia
ODL
114
32
0
15 Aug 2019