MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks

17 September 2020
Zhiqiang Shen, Marios Savvides
arXiv:2009.08453

Papers citing "MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks"

17 / 17 papers shown

Efficient Knowledge Distillation: Empowering Small Language Models with Teacher Model Insights
Mohamad Ballout, U. Krumnack, Gunther Heidemann, Kai-Uwe Kühnberger
19 Sep 2024

Understanding the Effects of Projectors in Knowledge Distillation
Yudong Chen, Sen Wang, Jiajun Liu, Xuwei Xu, Frank de Hoog, Brano Kusy, Zi Huang
26 Oct 2023

Rethinking Soft Label in Label Distribution Learning Perspective
Seungbum Hong, Jihun Yoon, Bogyu Park, Min-Kook Choi
31 Jan 2023

Rethinking Out-of-Distribution Detection From a Human-Centric Perspective
Yao Zhu, YueFeng Chen, Xiaodan Li, Rong Zhang, Hui Xue, Xiang Tian, Rongxin Jiang, Bo Zheng, Yao-wu Chen
OODD
30 Nov 2022

Progressive Learning without Forgetting
Tao Feng, Hangjie Yuan, Mang Wang, Ziyuan Huang, Ang Bian, Jianzhou Zhang
CLL, KELM
28 Nov 2022

Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
Nianhui Guo, Joseph Bethge, Christoph Meinel, Haojin Yang
MQ
23 Nov 2022

Knowledge Distillation of Transformer-based Language Models Revisited
Chengqiang Lu, Jianwei Zhang, Yunfei Chu, Zhengyu Chen, Jingren Zhou, Fei Wu, Haiqing Chen, Hongxia Yang
VLM
29 Jun 2022

TransBoost: Improving the Best ImageNet Performance using Deep Transduction
Omer Belhasin, Guy Bar-Shalom, Ran El-Yaniv
ViT
26 May 2022

Cross-modal Contrastive Distillation for Instructional Activity Anticipation
Zhengyuan Yang, Jingen Liu, Jing-ling Huang, Xiaodong He, Tao Mei, Chenliang Xu, Jiebo Luo
18 Jan 2022

Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
MQ
01 Nov 2021

Knowledge distillation: A good teacher is patient and consistent
Lucas Beyer, Xiaohua Zhai, Amelie Royer, L. Markeeva, Rohan Anil, Alexander Kolesnikov
VLM
09 Jun 2021

BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, Xiaojun Chang
ViT
23 Mar 2021

S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration
Zhiqiang Shen, Zechun Liu, Jie Qin, Lei Huang, Kwang-Ting Cheng, Marios Savvides
UQCV, SSL, MQ
17 Feb 2021

Concept Generalization in Visual Representation Learning
Mert Bulent Sariyildiz, Yannis Kalantidis, Diane Larlus, Alahari Karteek
SSL
10 Dec 2020

Joint Multi-Dimension Pruning via Numerical Gradient Update
Zechun Liu, X. Zhang, Zhiqiang Shen, Zhe Li, Yichen Wei, Kwang-Ting Cheng, Jian Sun
18 May 2020

Knowledge Distillation by On-the-Fly Native Ensemble
Xu Lan, Xiatian Zhu, S. Gong
12 Jun 2018

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016