ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
arXiv:1905.08094
17 May 2019
Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Tags: FedML

Papers citing "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation"

50 / 143 papers shown
From Knowledge Distillation to Self-Knowledge Distillation: A Unified Approach with Normalized Loss and Customized Soft Labels
  Zhendong Yang, Ailing Zeng, Zhe Li, Tianke Zhang, Chun Yuan, Yu Li
  23 Mar 2023 · 29 / 72 / 0

MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation
  Vitaliy Kinakh, M. Drozdova, Slava Voloshynovskiy
  21 Mar 2023 · 40 / 1 / 0

High-level Feature Guided Decoding for Semantic Segmentation
  Ye Huang, Di Kang, Shenghua Gao, Wen Li, Lixin Duan
  15 Mar 2023 · 23 / 0 / 0

Distilling Calibrated Student from an Uncalibrated Teacher
  Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra
  Tags: FedML
  22 Feb 2023 · 34 / 2 / 0

Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective
  Jongwoo Ko, Seungjoon Park, Minchan Jeong, S. Hong, Euijai Ahn, Duhyeuk Chang, Se-Young Yun
  03 Feb 2023 · 23 / 6 / 0
Knowledge Distillation on Graphs: A Survey
  Yijun Tian, Shichao Pei, Xiangliang Zhang, Chuxu Zhang, Nitesh V. Chawla
  01 Feb 2023 · 21 / 28 / 0

Dataset Distillation: A Comprehensive Review
  Ruonan Yu, Songhua Liu, Xinchao Wang
  Tags: DD
  17 Jan 2023 · 53 / 121 / 0

Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching
  Jiaqing Zhang, Jie Lei, Weiying Xie, Yunsong Li, Wenxuan Wang
  Tags: MQ
  31 Dec 2022 · 27 / 18 / 0

Self Meta Pseudo Labels: Meta Pseudo Labels Without The Teacher
  Kei-Sing Ng, Qingchen Wang
  Tags: VLM
  27 Dec 2022 · 21 / 1 / 0

Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning?
  Runpei Dong, Zekun Qi, Linfeng Zhang, Junbo Zhang, Jian-Yuan Sun, Zheng Ge, Li Yi, Kaisheng Ma
  Tags: ViT, 3DPC
  16 Dec 2022 · 29 / 84 / 0
Responsible Active Learning via Human-in-the-loop Peer Study
  Yu Cao, Jingya Wang, Baosheng Yu, Dacheng Tao
  24 Nov 2022 · 25 / 0 / 0

AI-KD: Adversarial learning and Implicit regularization for self-Knowledge Distillation
  Hyungmin Kim, Sungho Suh, Sunghyun Baek, Daehwan Kim, Daun Jeong, Hansang Cho, Junmo Kim
  20 Nov 2022 · 22 / 5 / 0

Structured Knowledge Distillation Towards Efficient and Compact Multi-View 3D Detection
  Linfeng Zhang, Yukang Shi, Hung-Shuo Tai, Zhipeng Zhang, Yuan He, Ke Wang, Kaisheng Ma
  14 Nov 2022 · 18 / 2 / 0

SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization
  Masud An Nur Islam Fahim, Jani Boutellier
  01 Nov 2022 · 37 / 0 / 0

Teacher-Student Architecture for Knowledge Learning: A Survey
  Chengming Hu, Xuan Li, Dan Liu, Xi Chen, Ju Wang, Xue Liu
  28 Oct 2022 · 20 / 35 / 0
Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision
  Jiongyu Guo, Defang Chen, Can Wang
  25 Oct 2022 · 16 / 3 / 0

Respecting Transfer Gap in Knowledge Distillation
  Yulei Niu, Long Chen, Chan Zhou, Hanwang Zhang
  23 Oct 2022 · 26 / 23 / 0

Large Language Models Can Self-Improve
  Jiaxin Huang, S. Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, Jiawei Han
  Tags: ReLM, AI4MH, LRM
  20 Oct 2022 · 47 / 564 / 0

Approximating Continuous Convolutions for Deep Network Compression
  Theo W. Costain, V. Prisacariu
  17 Oct 2022 · 30 / 0 / 0

APSNet: Attention Based Point Cloud Sampling
  Yang Ye, Xiulong Yang, Shihao Ji
  Tags: 3DPC
  11 Oct 2022 · 34 / 6 / 0
Stimulative Training of Residual Networks: A Social Psychology Perspective of Loafing
  Peng Ye, Shengji Tang, Baopu Li, Tao Chen, Wanli Ouyang
  09 Oct 2022 · 31 / 13 / 0

Teaching Yourself: Graph Self-Distillation on Neighborhood for Node Classification
  Lirong Wu, Jun-Xiong Xia, Haitao Lin, Zhangyang Gao, Zicheng Liu, Guojiang Zhao, Stan Z. Li
  05 Oct 2022 · 61 / 6 / 0

TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories
  Jannik Zürn, Sebastian Weber, Wolfram Burgard
  12 Sep 2022 · 32 / 5 / 0

ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval
  Nicola Messina, Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Fabrizio Falchi, Giuseppe Amato, Rita Cucchiara
  Tags: VLM
  29 Jul 2022 · 16 / 21 / 0

ProSelfLC: Progressive Self Label Correction Towards A Low-Temperature Entropy State
  Xinshao Wang, Yang Hua, Elyor Kodirov, S. Mukherjee, David A. Clifton, N. Robertson
  30 Jun 2022 · 19 / 6 / 0
Teach me how to Interpolate a Myriad of Embeddings
  Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis
  29 Jun 2022 · 40 / 2 / 0

Variational Distillation for Multi-View Learning
  Xudong Tian, Zhizhong Zhang, Cong Wang, Wensheng Zhang, Yanyun Qu, Lizhuang Ma, Zongze Wu, Yuan Xie, Dacheng Tao
  20 Jun 2022 · 26 / 5 / 0

Improving Generalization of Metric Learning via Listwise Self-distillation
  Zelong Zeng, Fan Yang, Zhilin Wang, Shin'ichi Satoh
  Tags: FedML
  17 Jun 2022 · 35 / 1 / 0

Multi-scale Feature Extraction and Fusion for Online Knowledge Distillation
  Panpan Zou, Yinglei Teng, Tao Niu
  16 Jun 2022 · 27 / 3 / 0

Confidence-aware Self-Semantic Distillation on Knowledge Graph Embedding
  Yichen Liu, C. Wang, Defang Chen, Zhehui Zhou, Yan Feng, Chun-Yen Chen
  07 Jun 2022 · 13 / 0 / 0
Vanilla Feature Distillation for Improving the Accuracy-Robustness Trade-Off in Adversarial Training
  Guodong Cao, Zhibo Wang, Xiaowei Dong, Zhifei Zhang, Hengchang Guo, Zhan Qin, Kui Ren
  Tags: AAML
  05 Jun 2022 · 27 / 1 / 0

Guided Deep Metric Learning
  Jorge Gonzalez-Zapata, Iván Reyes-Amezcua, Daniel Flores-Araiza, M. Mendez-Ruiz, G. Ochoa-Ruiz, Andres Mendez-Vazquez
  Tags: FedML
  04 Jun 2022 · 24 / 5 / 0

A General Multiple Data Augmentation Based Framework for Training Deep Neural Networks
  Bin Hu, Yu Sun, A. K. Qin
  Tags: AI4CE
  29 May 2022 · 28 / 0 / 0

Boosting Multi-Label Image Classification with Complementary Parallel Self-Distillation
  Jiazhi Xu, Sheng Huang, Fengtao Zhou, Luwen Huangfu, D. Zeng, Bo Liu
  23 May 2022 · 17 / 13 / 0

Masterful: A Training Platform for Computer Vision Models
  S. Wookey, Yaoshiang Ho, Thomas D. Rikert, Juan David Gil Lopez, Juan Manuel Munoz Beancur, ..., Ray Tawil, Aaron Sabin, Jack Lynch, Travis Harper, Nikhil Gajendrakumar
  Tags: VLM
  21 May 2022 · 18 / 1 / 0
Spot-adaptive Knowledge Distillation
  Jie Song, Ying Chen, Jingwen Ye, Mingli Song
  05 May 2022 · 20 / 72 / 0

Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding
  Yanfeng Chang, Yun-Nung Chen
  02 May 2022 · 22 / 9 / 0

Spatial Likelihood Voting with Self-Knowledge Distillation for Weakly Supervised Object Detection
  Ze Chen, Zhihang Fu, Jianqiang Huang, Mingyuan Tao, Rongxin Jiang, Xiang Tian, Yao-wu Chen, Xiansheng Hua
  Tags: WSOD
  14 Apr 2022 · 22 / 4 / 0

Localization Distillation for Object Detection
  Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, Jun Wang, W. Zuo, Ming-Ming Cheng
  12 Apr 2022 · 21 / 64 / 0

Decompositional Generation Process for Instance-Dependent Partial Label Learning
  Congyu Qiao, Ning Xu, Xin Geng
  08 Apr 2022 · 126 / 75 / 0
Bimodal Distributed Binarized Neural Networks
  T. Rozen, Moshe Kimhi, Brian Chmiel, A. Mendelson, Chaim Baskin
  Tags: MQ
  05 Apr 2022 · 41 / 4 / 0

Fast Real-time Personalized Speech Enhancement: End-to-End Enhancement Network (E3Net) and Knowledge Distillation
  Manthan Thakker, Sefik Emre Eskimez, Takuya Yoshioka, Huaming Wang
  02 Apr 2022 · 14 / 28 / 0

Self-distillation Augmented Masked Autoencoders for Histopathological Image Classification
  Yang Luo, Zhineng Chen, Shengtian Zhou, Xieping Gao
  31 Mar 2022 · 31 / 1 / 0

Self-Distillation from the Last Mini-Batch for Consistency Regularization
  Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo
  30 Mar 2022 · 15 / 60 / 0

Linking Emergent and Natural Languages via Corpus Transfer
  Shunyu Yao, Mo Yu, Yang Zhang, Karthik Narasimhan, J. Tenenbaum, Chuang Gan
  24 Mar 2022 · 24 / 15 / 0
Multitask Emotion Recognition Model with Knowledge Distillation and Task Discriminator
  Euiseok Jeong, Geesung Oh, Sejoon Lim
  Tags: CVBM
  24 Mar 2022 · 17 / 7 / 0

Reducing Flipping Errors in Deep Neural Networks
  Xiang Deng, Yun Xiao, Bo Long, Zhongfei Zhang
  Tags: AAML
  16 Mar 2022 · 30 / 3 / 0

Wavelet Knowledge Distillation: Towards Efficient Image-to-Image Translation
  Linfeng Zhang, Xin Chen, Xiaobing Tu, Pengfei Wan, N. Xu, Kaisheng Ma
  12 Mar 2022 · 16 / 62 / 0

Better Supervisory Signals by Observing Learning Paths
  Yi Ren, Shangmin Guo, Danica J. Sutherland
  04 Mar 2022 · 30 / 21 / 0

Adaptive Discriminative Regularization for Visual Classification
  Qingsong Zhao, Yi Wang, Shuguang Dou, Chen Gong, Yin Wang, Cairong Zhao
  02 Mar 2022 · 18 / 0 / 0