Label Refinery: Improving ImageNet Classification through Label Progression
arXiv 1805.02641 · 7 May 2018
Hessam Bagherinezhad, Maxwell Horton, Mohammad Rastegari, Ali Farhadi
Papers citing "Label Refinery: Improving ImageNet Classification through Label Progression"
50 / 113 papers shown
NT-ML: Backdoor Defense via Non-target Label Training and Mutual Learning
Wenjie Huo, Katinka Wolter · AAML · 72 / 0 / 0 · 07 Aug 2025

Combined Image Data Augmentations diminish the benefits of Adaptive Label Smoothing
Georg Siedel, Ekagra Gupta, Weijia Shao, S. Vock, Andrey Morozov · 86 / 0 / 0 · 22 Jul 2025
Sketch Down the FLOPs: Towards Efficient Networks for Human Sketch
Computer Vision and Pattern Recognition (CVPR), 2025
Aneeshan Sain, Subhajit Maity, Pinaki Nath Chowdhury, Subhadeep Koley, A. Bhunia, Yi-Zhe Song · 3DH · 216 / 0 / 0 · 29 May 2025

KV Prediction for Improved Time to First Token
Maxwell Horton, Qingqing Cao, Chenfan Sun, Yanzi Jin, Sachin Mehta, Mohammad Rastegari, Moin Nabi · AI4TS · 167 / 7 / 0 · 10 Oct 2024
Exploration of Class Center for Fine-Grained Visual Classification
Hang Yao, Qiguang Miao, Peipei Zhao, Chaoneng Li, Xin Li, Guanwen Feng, Ruyi Liu · VLM · 169 / 11 / 0 · 05 Jul 2024

DistilDoc: Knowledge Distillation for Visually-Rich Document Applications
Jordy Van Landeghem, Subhajit Maity, Ayan Banerjee, Matthew Blaschko, Marie-Francine Moens, Josep Lladós, Sanket Biswas · 282 / 3 / 0 · 12 Jun 2024
Edge-guided and Class-balanced Active Learning for Semantic Segmentation of Aerial Images
Lianlei Shan, Weiqiang Wang, Ke Lv, Bin Luo · VLM · 255 / 10 / 0 · 28 May 2024

Two-Stage Multi-task Self-Supervised Learning for Medical Image Segmentation
Binyan Hu, A. K. Qin · SSL · 103 / 0 / 0 · 11 Feb 2024
CPR++: Object Localization via Single Coarse Point Supervision
Xuehui Yu, Pengfei Chen, Kuiran Wang, Xumeng Han, Guorong Li, Zhenjun Han, QiXiang Ye, Jianbin Jiao · 134 / 4 / 0 · 30 Jan 2024

Temporal Knowledge Distillation for Time-Sensitive Financial Services Applications
Hongda Shen, Eren Kurshan · AAML · 144 / 3 / 0 · 28 Dec 2023
Expediting Contrastive Language-Image Pretraining via Self-distilled Encoders
Bumsoo Kim, Jinhyung Kim, Yeonsik Jo, S. Kim · VLM · 264 / 5 / 0 · 19 Dec 2023

Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval
European Conference on Computer Vision (ECCV), 2023
P. Li, Hongtao Xie, Jiannan Ge, Lei Zhang, Shaobo Min, Yongdong Zhang · 134 / 21 / 0 · 12 Oct 2023
Teacher-Student Architecture for Knowledge Distillation: A Survey
Chengming Hu, Xuan Li, Danyang Liu, Haolun Wu, Xi Chen, Ju Wang, Xue Liu · 228 / 38 / 0 · 08 Aug 2023

Perception and Semantic Aware Regularization for Sequential Confidence Calibration
Computer Vision and Pattern Recognition (CVPR), 2023
Zhenghua Peng, Yuanmao Luo, Tianshui Chen, Keke Xu, Shuangping Huang · AI4TS · 217 / 3 / 0 · 31 May 2023
TaxoKnow: Taxonomy as Prior Knowledge in the Loss Function of Multi-class Classification
Mohsen Pourvali, Yao Meng, Chen Sheng, Yangzhou Du · 113 / 0 / 0 · 24 May 2023

MSVQ: Self-Supervised Learning with Multiple Sample Views and Queues
Knowledge-Based Systems (KBS), 2023
Chengwei Peng, Xianzhong Long, Yun Li · SSL · 200 / 2 / 0 · 09 May 2023
Physical Knowledge Enhanced Deep Neural Network for Sea Surface Temperature Prediction
IEEE Transactions on Geoscience and Remote Sensing (TGRS), 2023
Yuxin Meng, Feng Gao, Eric Rigall, Ran Dong, Junyu Dong, Q. Du · 133 / 36 / 0 · 19 Apr 2023

Mixed-Type Wafer Classification For Low Memory Devices Using Knowledge Distillation
Nitish Shukla, Anurima Dey, K. Srivatsan · 171 / 1 / 0 · 24 Mar 2023
Exploiting Unlabelled Photos for Stronger Fine-Grained SBIR
Computer Vision and Pattern Recognition (CVPR), 2023
Aneeshan Sain, A. Bhunia, Subhadeep Koley, Pinaki Nath Chowdhury, Soumitri Chattopadhyay, Tao Xiang, Yi-Zhe Song · 256 / 24 / 0 · 24 Mar 2023

Knowledge Distillation from Single to Multi Labels: an Empirical Study
Youcai Zhang, Yuzhuo Qin, Heng-Ye Liu, Yanhao Zhang, Yaqian Li, X. Gu · VLM · 160 / 2 / 0 · 15 Mar 2023
A Scalable and Efficient Iterative Method for Copying Machine Learning Classifiers
Journal of Machine Learning Research (JMLR), 2023
N. Statuto, Irene Unceta, Jordi Nin, O. Pujol · 178 / 0 / 0 · 06 Feb 2023

Structured Knowledge Distillation Towards Efficient and Compact Multi-View 3D Detection
Linfeng Zhang, Yukang Shi, Hung-Shuo Tai, Zhipeng Zhang, Yuan He, Ke Wang, Kaisheng Ma · 208 / 3 / 0 · 14 Nov 2022
Teacher-Student Architecture for Knowledge Learning: A Survey
Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu · 242 / 41 / 0 · 28 Oct 2022

Constraining Pseudo-label in Self-training Unsupervised Domain Adaptation with Energy-based Model
International Journal of Intelligent Systems (IJIS), 2022
Lingsheng Kong, Bo Hu, Xiongchang Liu, Jun Lu, Jane You, Xiaofeng Liu · 211 / 13 / 0 · 26 Aug 2022
Efficient One Pass Self-distillation with Zipf's Label Smoothing
European Conference on Computer Vision (ECCV), 2022
Jiajun Liang, Linze Li, Z. Bing, Borui Zhao, Yao Tang, Bo Lin, Haoqiang Fan · 111 / 23 / 0 · 26 Jul 2022

Federated Semi-Supervised Domain Adaptation via Knowledge Transfer
Madhureeta Das, Xianhao Chen, Xiaoyong Yuan, Lan Zhang · 131 / 2 / 0 · 21 Jul 2022
Is one annotation enough? A data-centric image classification benchmark for noisy and ambiguous label estimation
Neural Information Processing Systems (NeurIPS), 2022
Lars Schmarje, Vasco Grossmann, Claudius Zelenka, S. Dippel, R. Kiko, ..., M. Pastell, J. Stracke, A. Valros, N. Volkmann, Reinhard Koch · 319 / 41 / 0 · 13 Jul 2022

Complementary Bi-directional Feature Compression for Indoor 360° Semantic Segmentation with Self-distillation
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2022
Zishuo Zheng, Chunyu Lin, Lang Nie, K. Liao, Zhijie Shen, Yao Zhao · MDE · 138 / 19 / 0 · 06 Jul 2022
Boosting the Adversarial Transferability of Surrogate Models with Dark Knowledge
IEEE International Conference on Tools with Artificial Intelligence (ICTAI), 2022
Dingcheng Yang, Zihao Xiao, Wenjian Yu · AAML · 168 / 12 / 0 · 16 Jun 2022

A Survey of Automated Data Augmentation Algorithms for Deep Learning-based Image Classification Tasks
Knowledge and Information Systems (KAIS), 2022
Z. Yang, Richard Sinnott, James Bailey, Qiuhong Ke · 210 / 55 / 0 · 14 Jun 2022
Benign Overfitting in Classification: Provably Counter Label Noise with Larger Models
International Conference on Learning Representations (ICLR), 2022
Kaiyue Wen, Jiaye Teng, J.N. Zhang · NoLa · 117 / 5 / 0 · 01 Jun 2022

A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks
IEEE International Joint Conference on Neural Networks (IJCNN), 2022
Bin Hu, Yu Sun, A. K. Qin · AI4CE · 129 / 0 / 0 · 29 May 2022
Generalized Knowledge Distillation via Relationship Matching
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022
Han-Jia Ye, Su Lu, De-Chuan Zhan · FedML · 155 / 25 / 0 · 04 May 2022

Selective Cross-Task Distillation
Su Lu, Han-Jia Ye, De-Chuan Zhan · 216 / 1 / 0 · 25 Apr 2022
Object Localization under Single Coarse Point Supervision
Computer Vision and Pattern Recognition (CVPR), 2022
Xuehui Yu, Pengfei Chen, Di Wu, Najmul Hassan, Guorong Li, Junchi Yan, Humphrey Shi, QiXiang Ye, Zhenjun Han · 3DPC · 182 / 34 / 0 · 17 Mar 2022

Reducing Flipping Errors in Deep Neural Networks
AAAI Conference on Artificial Intelligence (AAAI), 2022
Xiang Deng, Yun Xiao, Bo Long, Zhongfei Zhang · AAML · 112 / 4 / 0 · 16 Mar 2022
On the benefits of knowledge distillation for adversarial robustness
Javier Maroto, Guillermo Ortiz-Jiménez, P. Frossard · AAML · FedML · 130 / 27 / 0 · 14 Mar 2022

Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
International Conference on Machine Learning (ICML), 2022
Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt · MoMe · 578 / 1,251 / 1 · 10 Mar 2022
How many Observations are Enough? Knowledge Distillation for Trajectory Forecasting
Computer Vision and Pattern Recognition (CVPR), 2022
Alessio Monti, Angelo Porrello, Simone Calderara, Pasquale Coscia, Lamberto Ballan, Rita Cucchiara · 137 / 68 / 0 · 09 Mar 2022

Jointly Learning Knowledge Embedding and Neighborhood Consensus with Relational Knowledge Distillation for Entity Alignment
Xinhang Li, Yong Zhang, Chunxiao Xing · 157 / 6 / 0 · 25 Jan 2022

Adaptive Image Inpainting
Maitreya Suin, Kuldeep Purohit, A. N. Rajagopalan · 146 / 0 / 0 · 01 Jan 2022
Multi-Modality Distillation via Learning the teacher's modality-level Gram Matrix
Peng Liu · 123 / 0 / 0 · 21 Dec 2021

Data Efficient Language-supervised Zero-shot Recognition with Optimal Transport Distillation
Bichen Wu, Rui Cheng, Peizhao Zhang, Tianren Gao, Peter Vajda, Joseph E. Gonzalez · VLM · 242 / 53 / 0 · 17 Dec 2021

Constrained Mean Shift Using Distant Yet Related Neighbors for Representation Learning
K. Navaneet, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Kossar Pourahmadi, Akshayvarun Subramanya, Hamed Pirsiavash · SSL · 197 / 8 / 0 · 08 Dec 2021
Sliced Recursive Transformer
European Conference on Computer Vision (ECCV), 2021
Zhiqiang Shen, Zechun Liu, Eric P. Xing · ViT · 183 / 27 / 0 · 09 Nov 2021

Constrained Mean Shift for Representation Learning
Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Hamed Pirsiavash · SSL · 136 / 0 / 0 · 19 Oct 2021

Improving Binary Neural Networks through Fully Utilizing Latent Weights
Weixiang Xu, Qiang Chen, Xiangyu He, Peisong Wang, Jian Cheng · MQ · 154 / 6 / 0 · 12 Oct 2021
Knowledge Distillation with Noisy Labels for Natural Language Understanding
Shivendra Bhardwaj, Abbas Ghaddar, Ahmad Rashid, Khalil Bibi, Cheng-huan Li, A. Ghodsi, Philippe Langlais, Mehdi Rezagholizadeh · 135 / 2 / 0 · 21 Sep 2021

Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation
IEEE International Conference on Computer Vision (ICCV), 2021
A. Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Yi-Zhe Song · 161 / 31 / 0 · 26 Jul 2021
Novel Visual Category Discovery with Dual Ranking Statistics and Mutual Knowledge Distillation
Neural Information Processing Systems (NeurIPS), 2021
Bingchen Zhao, Kai Han · 198 / 133 / 0 · 07 Jul 2021