arXiv:1805.02641
Label Refinery: Improving ImageNet Classification through Label Progression
7 May 2018
Hessam Bagherinezhad
Maxwell Horton
Mohammad Rastegari
Ali Farhadi
Papers citing
"Label Refinery: Improving ImageNet Classification through Label Progression"
50 / 113 papers shown
Few-Shot Learning with a Strong Teacher
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Han-Jia Ye
Lu Ming
De-Chuan Zhan
Wei-Lun Chao
304
71
0
01 Jul 2021
A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning
Neural Information Processing Systems (NeurIPS), 2021
Pan Zhou
Caiming Xiong
Xiaotong Yuan
Guosheng Lin
SSL
152
12
0
28 Jun 2021
Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification
Yixiao Ge
Xiao Zhang
Ching Lam Choi
Ka Chun Cheung
Peipei Zhao
Feng Zhu
Xiaogang Wang
Rui Zhao
Jiaming Song
FedML
UQCV
330
36
0
27 Apr 2021
Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation
Rui Cheng
Bichen Wu
Peizhao Zhang
Peter Vajda
Joseph E. Gonzalez
CLIP
VLM
174
34
0
18 Apr 2021
Is Label Smoothing Truly Incompatible with Knowledge Distillation: An Empirical Study
International Conference on Learning Representations (ICLR), 2021
Zhiqiang Shen
Zechun Liu
Dejia Xu
Zitian Chen
Kwang-Ting Cheng
Marios Savvides
161
81
0
01 Apr 2021
Enhancing Segment-Based Speech Emotion Recognition by Deep Self-Learning
Shuiyang Mao
P. Ching
Tan Lee
120
2
0
30 Mar 2021
A New Training Framework for Deep Neural Network
Zhenyan Hou
Wenxuan Fan
FedML
298
2
0
12 Mar 2021
ISP Distillation
IEEE Open Journal of Signal Processing (JOSP), 2021
Eli Schwartz
A. Bronstein
Raja Giryes
VLM
321
8
0
25 Jan 2021
Self-Adaptive Training: Bridging Supervised and Self-Supervised Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Lang Huang
Chaoning Zhang
Hongyang R. Zhang
SSL
297
31
0
21 Jan 2021
Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
International Conference on Learning Representations (ICLR), 2021
Yige Li
Lingjuan Lyu
Nodens Koren
X. Lyu
Yue Liu
Jiabo He
AAML
FedML
458
508
0
15 Jan 2021
Object Detection for Understanding Assembly Instruction Using Context-aware Data Augmentation and Cascade Mask R-CNN
J. Lee
S. Lee
S. Back
S. Shin
K. Lee
143
6
0
07 Jan 2021
Label Augmentation via Time-based Knowledge Distillation for Financial Anomaly Detection
Hongda Shen
E. Kursun
AAML
252
2
0
05 Jan 2021
Energy-constrained Self-training for Unsupervised Domain Adaptation
International Conference on Pattern Recognition (ICPR), 2021
Xiaofeng Liu
Bo Hu
Xiongchang Liu
Jun Lu
J. You
Lingsheng Kong
352
32
0
01 Jan 2021
Learning with Retrospection
AAAI Conference on Artificial Intelligence (AAAI), 2020
Xiang Deng
Zhongfei Zhang
114
20
0
24 Dec 2020
ISD: Self-Supervised Learning by Iterative Similarity Distillation
IEEE International Conference on Computer Vision (ICCV), 2020
Ajinkya Tejankar
Soroush Abbasi Koohpayegani
Vipin Pillai
Paolo Favaro
Hamed Pirsiavash
SSL
338
47
0
16 Dec 2020
Post-Hurricane Damage Assessment Using Satellite Imagery and Geolocation Features
Risk Analysis (Risk Anal.), 2020
Q. D. Cao
Youngjun Choe
162
10
0
15 Dec 2020
Self-Training for Class-Incremental Semantic Segmentation
IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020
Lu Yu
Xialei Liu
Joost van de Weijer
CLL
SSL
355
65
0
06 Dec 2020
Layer-Wise Data-Free CNN Compression
International Conference on Pattern Recognition (ICPR), 2020
Maxwell Horton
Yanzi Jin
Ali Farhadi
Mohammad Rastegari
MQ
241
19
0
18 Nov 2020
CompRess: Self-Supervised Learning by Compressing Representations
Neural Information Processing Systems (NeurIPS), 2020
Soroush Abbasi Koohpayegani
Ajinkya Tejankar
Hamed Pirsiavash
SSL
256
98
0
28 Oct 2020
A Data Set and a Convolutional Model for Iconography Classification in Paintings
Federico Milani
Piero Fraternali
315
59
0
06 Oct 2020
MetaDistiller: Network Self-Boosting via Meta-Learned Top-Down Distillation
European Conference on Computer Vision (ECCV), 2020
Benlin Liu
Yongming Rao
Jiwen Lu
Jie Zhou
Cho-Jui Hsieh
180
41
0
27 Aug 2020
GREEN: a Graph REsidual rE-ranking Network for Grading Diabetic Retinopathy
Shaoteng Liu
Lijun Gong
Kai Ma
Yefeng Zheng
MedIm
229
46
0
20 Jul 2020
Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation
European Conference on Computer Vision (ECCV), 2020
Haoran Wang
T. Shen
Wei Zhang
Lingyu Duan
Tao Mei
194
324
0
17 Jul 2020
Differential Replication in Machine Learning
Irene Unceta
Jordi Nin
O. Pujol
SyDa
132
1
0
15 Jul 2020
Towards Practical Lipreading with Distilled and Efficient Models
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Pingchuan Ma
Brais Martínez
Stavros Petridis
Maja Pantic
320
107
0
13 Jul 2020
Learning to Learn Parameterized Classification Networks for Scalable Input Images
European Conference on Computer Vision (ECCV), 2020
Duo Li
Anbang Yao
Qifeng Chen
156
12
0
13 Jul 2020
Data-Efficient Ranking Distillation for Image Retrieval
Asian Conference on Computer Vision (ACCV), 2020
Zakaria Laskar
Arno Solin
VLM
177
4
0
10 Jul 2020
Robust Re-Identification by Multiple Views Knowledge Distillation
European Conference on Computer Vision (ECCV), 2020
Angelo Porrello
Luca Bergamini
Simone Calderara
211
72
0
08 Jul 2020
Multiple Expert Brainstorming for Domain Adaptive Person Re-identification
Yunpeng Zhai
QiXiang Ye
Shijian Lu
Mengxi Jia
Rongrong Ji
Yonghong Tian
275
187
0
03 Jul 2020
Extracurricular Learning: Knowledge Transfer Beyond Empirical Distribution
Hadi Pouransari
Mojan Javaheripi
Vinay Sharma
Oncel Tuzel
172
5
0
30 Jun 2020
Towards Understanding Label Smoothing
Yi Tian Xu
Yuanhong Xu
Qi Qian
Hao Li
Rong Jin
UQCV
183
47
0
20 Jun 2020
Knowledge Distillation: A Survey
Jianping Gou
B. Yu
Stephen J. Maybank
Dacheng Tao
VLM
2.0K
3,768
0
09 Jun 2020
Keep off the Grass: Permissible Driving Routes from Radar with Weak Audio Supervision
David S. W. Williams
D. Martini
Matthew Gadd
Letizia Marchegiani
Paul Newman
175
12
0
11 May 2020
COLAM: Co-Learning of Deep Neural Networks and Soft Labels via Alternating Minimization
Neural Processing Letters (NPL), 2020
Xingjian Li
Haoyi Xiong
Haozhe An
Dejing Dou
Chengzhong Xu
FedML
110
3
0
26 Apr 2020
Regularizing Class-wise Predictions via Self-knowledge Distillation
Computer Vision and Pattern Recognition (CVPR), 2020
Sukmin Yun
Jongjin Park
Kimin Lee
Jinwoo Shin
254
324
0
31 Mar 2020
Circumventing Outliers of AutoAugment with Knowledge Distillation
European Conference on Computer Vision (ECCV), 2020
Longhui Wei
Anxiang Xiao
Lingxi Xie
Xin Chen
Xiaopeng Zhang
Qi Tian
160
66
0
25 Mar 2020
Synergic Adversarial Label Learning for Grading Retinal Diseases via Knowledge Distillation and Multi-task Learning
Lie Ju
Xin Wang
Xin Zhao
Huimin Lu
Dwarikanath Mahapatra
Paul Bonnington
Z. Ge
161
1
0
24 Mar 2020
Self-Adaptive Training: beyond Empirical Risk Minimization
Neural Information Processing Systems (NeurIPS), 2020
Lang Huang
Chaoning Zhang
Hongyang R. Zhang
NoLa
347
235
0
24 Feb 2020
Subclass Distillation
Rafael Müller
Simon Kornblith
Geoffrey E. Hinton
164
35
0
10 Feb 2020
Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation
British Machine Vision Conference (BMVC), 2019
Madan Ravi Ganesh
Jason J. Corso
ODL
251
10
0
13 Jan 2020
Least squares binary quantization of neural networks
Hadi Pouransari
Zhucheng Tu
Oncel Tuzel
MQ
282
36
0
09 Jan 2020
A simple baseline for domain adaptation using rotation prediction
Ajinkya Tejankar
Hamed Pirsiavash
SSL
130
5
0
26 Dec 2019
FQ-Conv: Fully Quantized Convolution for Efficient and Accurate Inference
Bram-Ernst Verhoef
Nathan Laubeuf
S. Cosemans
P. Debacker
Ioannis A. Papistas
A. Mallik
D. Verkest
MQ
197
16
0
19 Dec 2019
Preparing Lessons: Improve Knowledge Distillation with Better Supervision
Tiancheng Wen
Shenqi Lai
Xueming Qian
391
78
0
18 Nov 2019
Directional Adversarial Training for Cost Sensitive Deep Learning Classification Applications
Engineering applications of artificial intelligence (EAAI), 2019
M. Terzi
Gian Antonio Susto
Pratik Chaudhari
OOD
AAML
135
17
0
08 Oct 2019
Distillation ≈ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized Neural Network
Bin Dong
Jikai Hou
Yiping Lu
Zhihua Zhang
177
43
0
02 Oct 2019
Confidence Regularized Self-Training
IEEE International Conference on Computer Vision (ICCV), 2019
Yang Zou
Zhiding Yu
Xiaofeng Liu
B. Kumar
Jinsong Wang
693
877
0
26 Aug 2019
Adversarial-Based Knowledge Distillation for Multi-Model Ensemble and Noisy Data Refinement
Zhiqiang Shen
Zhankui He
Wanyun Cui
Jiahui Yu
Yutong Zheng
Chenchen Zhu
Marios Savvides
AAML
123
5
0
22 Aug 2019
Efficient Deep Neural Networks
Bichen Wu
137
12
0
20 Aug 2019
Adaptive Regularization of Labels
Qianggang Ding
Sifan Wu
Hao Sun
Jiadong Guo
Shutao Xia
ODL
127
33
0
15 Aug 2019
Page 2 of 3